About the Author(s)


Proscovia N. Ssentamu
Uganda Management Institute, Kampala, Uganda

Citation


Ssentamu, P.N., 2018, ‘Good practices, drawbacks and improvement strategies in external peer monitoring and evaluation: A case of Uganda National Council for Higher Education’, African Evaluation Journal 6(1), a261. https://doi.org/10.4102/aej.v6i1.261

Original Research

Good practices, drawbacks and improvement strategies in external peer monitoring and evaluation: A case of Uganda National Council for Higher Education

Proscovia N. Ssentamu

Received: 08 Aug. 2017; Accepted: 18 May 2018; Published: 30 July 2018

Copyright: © 2018. The Author(s). Licensee: AOSIS.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Background: Growing demand for higher education by national governments and their citizens, and the growth of public and private higher education institutions resulting from increased enrolment, have augmented the demand for monitoring and evaluation (M&E). Consequently, the National Council for Higher Education in Uganda was established and mandated, among other functions, to monitor, evaluate and regulate higher education institutions.

Objectives: To explore good practices, drawbacks and improvement strategies in the external peer M&E of higher education institutions.

Method: Using a qualitative research design, data were collected from 15 peers invited by the Council to participate in external M&E visits to higher education institutions.

Results: Several categories of good external peer M&E practices and drawbacks emerged, including statutory provisions for the external M&E exercise by the Council; the purpose of, planning for and capacity to undertake external M&E activities; involvement of peers and professional bodies; and political and legal interference.

Conclusion: Despite the availability of an M&E framework and the involvement of peers, the current external M&E model is centralised, bureaucratic and summative, and therefore generally not supportive of continuous institutional improvement based on feedback from M&E visits. The current Higher Education Law should be amended; the Council's M&E framework and practices should be periodically reviewed to match trends and needs; there should be a gradual shift from compliance-based to participatory and performance-based M&E; and a good policy environment should be created to nurture the growth and development of institutional self-monitoring and evaluation mechanisms geared towards a culture of continuous self-improvement.

Introduction

The 1998 World Declaration on Higher Education (HE) called for a major effort to improve the delivery of HE globally (UNESCO 2009b:iv). It was formulated, among other reasons, because of the important contribution of HE to the improvement of the social, cultural, political, economic and environmental facets of the global society. This implies that if a country aims to provide its citizens with an improved quality of life, it must ensure a high-quality HE system through teaching, research, advanced employment and service (UNESCO 2009b). More recently, Sustainable Development Goal 4 (SDG4) requires countries to ensure inclusive and equitable quality education and promote lifelong learning opportunities for all. Specific to HE, SDG target 4.3 requires that by 2030 countries should ensure equal access for all women and men to affordable, quality technical, vocational and tertiary education, including university education. However, the developing world is still grappling with the quality of HE, yet a commitment by governments to quality HE is instrumental in promoting progress towards the achievement of all the development goals, that is, poverty reduction, hunger eradication, improved health, gender equality and women’s empowerment, sustainable production and consumption, resilient cities, and more equal and inclusive societies.

The growing demand for HE by national governments and their citizens, and the growth of both public and private higher education institutions (HEIs), have raised demand for accountability by stakeholders and, consequently, led to the emergence of new and the strengthening of existing M&E mechanisms to manage the HE subsector. By 1986, Uganda had one university (Makerere) with about 10 000 students (Report of the Visitation Committee on Makerere University 2016). To date, there are 50 universities recognised by the Uganda National Council for Higher Education (hereafter referred to as NCHE or the Council). Of these, 32 universities are private, implying that about 80% of university education is managed by the private sector. Although this shift points to easier access to an expanded HE subsector, it raises quality concerns. This is coupled with dwindling Uganda government expenditure on HE, standing at less than 12% in Financial Year 2016–2017, down from 16.2% in 2009–2010 (Ministry of Finance, Planning and Economic Development 2015). The current HE budget is far below the 20% recommended in the Uganda Government White Paper on Public Universities (2008) and the average 18.4% government expenditure on HE in most African countries (Africa-America Institute 2015). According to the report by the Africa-America Institute (2015), the return on investment in HE, which had been rising and stood at 21%, was rated the highest globally. Despite this positive continental outlook, and despite the rapid expansion of HE and increased student enrolment, the capacity, and therefore quality, of both public and private HEIs in Uganda is generally constrained by financial resources, human resources, physical infrastructure and equipment.

In response to the quality concern, NCHE was established and mandated, among other functions, to monitor, evaluate and regulate accredited HEIs (Universities and Other Tertiary Institutions Act 2001, Section 5g as amended, hereafter referred to as the Higher Education Act). Further, the Quality Assurance Framework and the Licensing Process for HEIs (NCHE 2014) state:

The responsibility of the NCHE is to establish value-adding systems of external evaluation, which can validate institutional information on effectiveness of internal quality arrangements. The NCHE will use peer and expert reviews to conduct external audits in a regular cycle of audits or whenever these become necessary. (p. 12)

Council therefore undertakes external M&E visits to public and private HEIs. A range of areas are considered during such visits, including land, governance, infrastructure, programmes, staffing, student admission, teaching and learning processes, and financial health (NCHE 2014). To support the fulfilment of this mandate, NCHE selects members from its database and appoints them to visit the identified institutions. The teams vary in composition depending on the type of institution to be visited and the expertise required. This study explored good practices, drawbacks and improvement strategies in the external peer M&E of HEIs by the NCHE.

Research questions

The study was guided by three research questions:

  • What are the good external peer M&E practices?
  • What are the drawbacks in the external peer M&E practices?
  • How can the external peer M&E practices be improved?

The article is structured under the following sections: background to external peer M&E by the NCHE, presentation and discussion of findings, lessons learnt and recommendations.

Background to external peer monitoring and evaluation of higher education institutions by National Council for Higher Education

External peer monitoring and evaluation in the context of National Council for Higher Education

In the context of quality in HE, external peer M&E, also referred to as ‘audit’, is a process of examining what goes on in HEIs to ensure compliance with quality assurance procedures, integrity, standards and outcomes (NCHE 2014:2). ‘Quality assessment’ is the external assessment by peers of the quality of teaching and learning through the scrutiny of institutional documentation and student work, direct observation, interview and reference to performance indicators (NCHE 2014:2). Therefore, external peer M&E is the process of examining an institution to ensure compliance with the NCHE quality assurance procedures, integrity, standards and outcomes by peers appointed by NCHE.

Council was established under the Higher Education Act and its subsequent amendments to, among other functions, monitor, evaluate and regulate institutions of higher learning in Uganda (Higher Education Act, Section 5g). Since its inception in 2001, NCHE has undertaken M&E visits to various HEIs in relation to its mandate. The key ingredients of these external peer M&E exercises include the purpose, scope, tools and outcomes of such visits, as highlighted below.

Purpose of the external peer monitoring and evaluation exercise

There are three purposes: for a provisional licence, for a charter and for verification or routine visits (Higher Education Act 2001). Therefore, the process of external M&E starts right at the inception of proposing a university project and is continuous throughout the life history of a university.

Scope of the external peer monitoring and evaluation

The following are the focus areas for the external peer M&E visits:

  • land
  • governance
  • infrastructure
  • academic staff establishment
  • education facilities
  • financial health
  • student enrolment
  • academic programmes
  • strategic plan
  • research and publications (NCHE 2014:34–38).

Data collection methods

A variety of methods are used during the external M&E visits including:

  • face-to-face interviews with individual members of the management team
  • separate focus group discussion with university management team, staff and students
  • on-site tours and observations
  • verification of required documents.

Guided by NCHE, the methods are aligned to the purpose and scope of the external M&E visit. From this brief background, the presence of a formal external M&E policy and practice is evident.

Conceptual overview

According to the UNESCO Report (2009a:32), monitoring is an ongoing function that uses the systematic collection of data related to specified indicators to provide management and stakeholders of a development intervention with indications of the extent of progress and achievement with regard to expected results and progress in the use of allocated funds. Evaluation is a systematic and objective assessment of an ongoing or completed policy, programme or project, its design, implementation and results (UNESCO 2009a:32).

The contribution of M&E is multifarious including provision of timely assessments of the relevance, efficiency, effectiveness, impact and sustainability of interventions and overall progress against original objectives (UNESCO 2009a:32); supporting the improvement of performance and achievement of results; and improvement of management of outputs, outcomes and impact. In the World Bank Handbook for Development Practitioners (2004:xi), M&E is a powerful public management tool used to improve the way governments or organisations achieve results based on good performance feedback systems. The process enables organisations to systematically track implementation outputs and measure the effectiveness of projects and programmes (UNDP 2013). Therefore, M&E is an essential part of good management practice and integral to the day-to-day management of results.

In this study, external peer M&E is conceptualised as the systematic collection, analysis and assessment of evidence using national performance indicators to facilitate judgement on the relevance, effectiveness and efficiency of the performance of HEIs in their core functions and promote improvement. It enables checking and examining of a HEI to ensure compliance with predetermined quality assurance procedures, integrity, standards and outcomes by peers appointed by NCHE. Peers or expert colleagues working within similar or related HEIs and academic disciplines are identified by NCHE to support various external M&E activities in HEIs. Care is taken to ensure that the peer evaluators are not staff of, and have no conflict of interest in, the HEI visited. According to Altbach et al. (2009), peer evaluation rather than the traditional evaluation by government authorities has become a pattern for evaluating HE.

Methodology

The study used a qualitative research design to collect, analyse and interpret data. Data were collected using an unstructured, open-ended questionnaire to allow participants to freely express their opinions (Sekaran 2003). Additional data were collected through the review of relevant NCHE documents.

Having obtained research clearance, the study used the snowball sampling technique. Snowballing enables a researcher to make initial contact with a small group of people who are relevant to the research topic and to use them as referrals to contact others (Rahi 2017). The initial contact obtained from NCHE provided two contacts, who in turn provided referral contacts of peers with whom they had undertaken NCHE M&E assignments. This continued until the people contacted later only provided names already on the list. At this saturation point, the researcher stopped developing the contact list. The 21 peers were then invited by email to participate in the study. Upon acceptance, the questionnaire was emailed to them. For ethical considerations, the questionnaire included a statement to the effect that participants’ personal contact details would be held in strict confidence and the data provided used only for purposes of this study.

Of the 21 sampled participants (henceforth referred to as peers), 15 returned the completed questionnaire. The majority (13) of the peer monitors and evaluators were active HE practitioners (professors, senior lecturers and top university administrators). The sampling of homogeneous participants using the snowball technique enabled a focus on depth and similarities, and narrowed the range of variation in responses on external peer M&E. The completed questionnaires were downloaded and sorted, and the responses clustered to identify emerging issues and develop themes.
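To make the referral-and-saturation procedure concrete, the following is a minimal sketch (in Python) of the logic described above. It is an illustration only: the function name, the toy referral network and the peer labels are hypothetical and do not represent the study's actual contacts or data.

    # Minimal sketch of the snowball-to-saturation logic described above.
    # The referral network and peer labels are hypothetical illustrations,
    # not the study's actual data.

    def snowball_sample(seed_contacts, get_referrals):
        """Grow a contact list from seed contacts until no new names appear."""
        contacted = set(seed_contacts)
        frontier = list(seed_contacts)
        while frontier:
            new_names = []
            for peer in frontier:
                for referral in get_referrals(peer):
                    if referral not in contacted:   # keep only genuinely new contacts
                        contacted.add(referral)
                        new_names.append(referral)
            frontier = new_names                    # empty frontier = saturation reached
        return contacted

    # Toy referral network (hypothetical).
    referral_network = {
        "peer_A": ["peer_B", "peer_C"],
        "peer_B": ["peer_C", "peer_D"],
        "peer_C": ["peer_A"],
        "peer_D": ["peer_B"],
    }

    sample = snowball_sample(["peer_A"], lambda p: referral_network.get(p, []))
    print(sorted(sample))  # ['peer_A', 'peer_B', 'peer_C', 'peer_D']

In the study itself, the analogous stopping rule was reached when newly contacted peers named only people already on the list, at which point the 21 identified peers were invited by email.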

Ethical considerations

Ethical clearance was granted by the Institute Research Centre with reference number: UMIR26/4.

Presentation of findings

Good monitoring and evaluation practices by the National Council for Higher Education identified by peers

Four major categories of good external M&E practices of HEIs by the NCHE emerged, namely the statutory provisions for external M&E; the purpose of external M&E; the involvement of external peers and professional bodies; and the M&E process. The findings are presented and analysed below.

Statutory provisions for the external monitoring and evaluation exercise by the Council

Twelve out of the 15 peers noted that availability of statutory provisions, approved guidelines and checklists guided the M&E exercise. This observation is summed up by one of the peers as follows:

‘NCHE published a booklet under the title: ‘Quality Assurance Framework for Universities and the Licensing Process for Higher Education Institutions’. This Booklet is a good guide for both the Institutions and the team that goes to carry out external M&E in Institutions. It covers the key areas that must be monitored and evaluated in every Institution.’ (Participant PUV1, Male, Professor, Former Vice Chancellor)

The several easy-to-access statutory provisions and obligations, including the Higher Education Act, 2001, provide the minimum standards by which NCHE legally undertakes external M&E of HEIs in Uganda.

The purpose of external monitoring and evaluation by the National Council for Higher Education

The peers appreciated the purpose of the external M&E by NCHE. Regarding the HE Act, the peers noted that external M&E was undertaken to ensure that institutions meet the minimum standards to be granted letters of interim authority, provisional licences or charters. From the findings, NCHE is also mandated to periodically visit chartered HEIs to ensure that quality standards are maintained, for instance that institutions conform to the law, have adequate and quality inputs (accredited programmes, qualified staff and students, suitable and adequate facilities and equipment), and engage in quality teaching and learning, research and community service.

A peer noted that Council receives and investigates complaints from HEIs, for instance requests to verify students’ academic credentials that institutions are unsure of during the admission process. However, key to the process is the provision of feedback, as illustrated in the following observation by a peer: Council provides ‘feedback to the institutions monitored and evaluated with the attendant recommendations and points of action’. (Participant PN1, Male, PhD) The purpose of an M&E activity cannot be brought to conclusion until timely feedback is provided and acted upon to improve performance.

Although the purpose for which external M&E is undertaken is crucial, and peers are involved in the exercise, accountability seems to be the ultimate purpose. As the M&E exercise emanates from outside the institutions, it is top–down and therefore perceived as bureaucratic.

Involvement of external peers and professional bodies

All 15 peers noted that the inclusion of professional bodies and subject matter experts on external M&E teams is a good practice. A peer noted that the use of peers provides an independent academic and professional opinion, which would have been lost if the team solely comprised NCHE members. The involvement of professional bodies in the M&E is good practice because the professional bodies are the final recipients of the graduates.

The following excerpts from the findings illustrate some of the rewards from this practice:

‘The Council selects a pool of monitors and evaluators from higher education institutions … this makes the monitoring and evaluation of the institutions publically owned.’ (Participant PU9, Male, PhD)

‘The composition of the team going to undertake M&E is accompanied by representatives from the Professional Councils … depending on the Programmes accredited by NCHE to be offered by the institution being visited.’ (Participant PUV1, Male, Professor, Former Vice Chancellor)

Findings also show that the selection of the team of experts by Council is a rigorous process. Advertisements are sent out in the media inviting senior managers and educationists from various HEIs and professions to apply. After selection, the staff are oriented in institutional and programme accreditation procedures. These form a pool of experts from which NCHE selects depending on the M&E activity at hand. Before being assigned to visit an institution, the experts are required to declare potential conflict of interest.

The external monitoring and evaluation process by National Council for Higher Education

Fourteen of the 15 peers considered the external M&E process good practice. Below are compiled and paraphrased responses providing the stepwise procedure used:

  • Institutions are informed in advance of the intended Council visit on the M&E exercise.
  • Council selects and assembles the visitation teams to undertake the M&E exercise. The selection is based on qualifications, level of seniority, expertise, relevant skills and experience.
  • The team meets with technical staff from Council to:
    • Receive a briefing on the visitation protocol, for example activities to be carried out and expected outcomes of the visit.
    • Review Council forms completed by the host institution. The completed forms contain all the institutional information vital for the exercise. This gives the team the opportunity to study the institution and plan the visiting strategy.
    • Review relevant documents.
  • During the visit, the team undertakes the following activities:
    • Holds meetings with the stakeholders in the institution. These include top management, student leaders, and teaching, administrative, and support staff.
    • Reviews the documents presented for clarity.
    • Tours the institution to observe the facilities and equipment, based on the Council’s standard ratios.
    • Assesses and analyses the information obtained from the institution.
    • Documents key findings.
    • Holds a debriefing meeting with top management to discuss key findings.
  • After the visit, the team writes and submits the final report to Council. The final report must be approved and endorsed by all teammates.
  • Council receives and discusses the report.
  • Upon endorsement, the report is sent to the institution to address issues raised. By law, Council issues a Notice of Intention to revoke its licence if the institution fails to remedy its weaknesses within 6 months.

A successful M&E activity cannot be better than its process; its quality hinges on the selection of the peers to undertake the M&E activity, the level of preparation of the institutions being visited, the M&E schedule, the strategies and tools used in data collection, report writing and feedback to the stakeholders. The above findings show that although the external M&E process is top–down, it is transparent and participatory.

Clearly, the current external M&E has several good practices, which should be maintained.

Drawbacks in external monitoring and evaluation

Several drawbacks emerged from the findings as presented and analysed below.

A lacuna in the law and guidelines on external monitoring and evaluation

Ten peers agreed on this, as summarised by one peer:

‘The capacity indicators/benchmarks do not take into consideration certain factors e.g. when computing space per student for the library, computer laboratory and lecture rooms, there is an assumption that student use these facilities at the same time.’ (Participant PCN, Male, Research Consultant)

Over 10 Statutory Instruments were drafted between 2005 and 2008, implying a need to revise them in line with the rapidly changing HE context. For instance, as stipulated in the Quality Assurance Framework and the Licensing Process for Higher Education Institutions (NCHE 2016), land acreage is no longer a viable capacity indicator, because some HEIs are in locations where building upwards is the only practical option. The current measurements regarding infrastructure such as libraries, laboratories and lecture rooms are also no longer viable because most HEIs are investing in digital libraries, virtual laboratories and online or e-learning approaches. In addition, virtual universities are now a burgeoning reality, and these cannot be subjected to the same M&E procedures and processes characteristic of traditional brick-and-mortar universities.
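To illustrate the benchmark concern raised in the quotation above, the short sketch below contrasts a naive reading of a space-per-student norm (all enrolled students use a facility simultaneously) with one adjusted for peak utilisation. All figures (enrolment, space norm, peak utilisation rate) are hypothetical assumptions for illustration, not NCHE's actual benchmarks.

    # Hypothetical worked example: library space requirement under two readings
    # of a space-per-student benchmark. All numbers are assumed for illustration.

    enrolled_students = 5000
    space_per_student_m2 = 1.0        # assumed norm: 1 m^2 of library space per student

    # Naive reading: every enrolled student uses the library at the same time.
    naive_required_space = enrolled_students * space_per_student_m2

    # Adjusted reading: only a fraction of students are present at peak times.
    peak_utilisation = 0.20           # assume at most 20% of students at peak
    adjusted_required_space = enrolled_students * peak_utilisation * space_per_student_m2

    print(f"Naive requirement:    {naive_required_space:,.0f} m^2")     # 5,000 m^2
    print(f"Adjusted requirement: {adjusted_required_space:,.0f} m^2")  # 1,000 m^2

Under these assumed figures, the two readings differ fivefold, which is the peers' point: without an explicit utilisation assumption, the same benchmark can imply very different capacity requirements.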

Lack of criteria for selecting experts for the M&E assignments and use of common benchmarks, regardless of peculiarities found in individual HEIs, were also cited.

This implies the need to periodically review the policy frameworks, taking into consideration the dynamism in education and in M&E practices. It also implies making the policy frameworks flexible enough to allow for creativity and innovation, with the frameworks providing the basic standards within which external M&E activities should take place.

Inadequate planning for external monitoring and evaluation activities by the National Council for Higher Education

Although the majority (9/15) of the peers highlighted the external M&E process as a good practice, some of them noted inadequate planning of the exercises by Council, including invitation of the teams at short notice and untimely delivery of relevant documentation. A peer summarised the inadequate planning as follows:

‘Untimely delivery of literature (application, previous report visits, etc.) about the Institution to the experts’ team scheduled to conduct the monitoring and evaluation.’ (Participant PC4, Male, PhD, Consultant)

Planning is key in any M&E exercise, regardless of its purpose and size. However, the invitation of experts at short notice affects the quality of the M&E exercises and reports. This could imply that the external M&E exercise is reactive rather than proactive, and therefore ad hoc. A peer illustrates this further: ‘Council seems to come to light when there is a whistleblower about wrong-doing: firefighting’ (Participant PU1, Male, PhD, Director Research in University).

Inadequate capacity to carry out National Council for Higher Education mandate

The findings show that Council lacks adequate professional and technical capacity to carry out its mandate, which affects the quality and timeliness of the M&E exercises and reports. The gaps cited include: inability to enforce the regulations and minimum standards; a tendency not to monitor public HEIs, which makes enforcing standards in private institutions seem punitive; leniency in allowing HEIs to operate for years without meeting the minimum requirements; failure to identify and effectively deal with illegal institutions teaching unaccredited programmes; inability to provide timely feedback to institutions on M&E visits; inability to assess outcomes in HEIs; and conflict of interest where Council members are themselves HEI managers.

These observations are indicative of inadequate technical capacity of Council staff, which is further compromised by low staffing levels. Therefore, Council has to rely on external peers, the majority of whom need training to be able to meaningfully engage in external M&E activities.

The Higher Education Act provides that the Chair of the Council is a University Vice Chancellor, and over 60% of the Council members are from various HEIs. This is a potential source of conflict of interest.

Funding gaps in supporting external monitoring and evaluation activities

Despite increased pressure and demand for M&E in HEIs, there is inadequate logistical support to successfully undertake this activity. Two peers noted the following:

‘There is usually need to assemble a big team of inspectors to go and visit an institution for external M&E. But because of financial constraints one finds that NCHE is compromised on the numbers and type of team members.’ (Participant PUV2, Male, Professor, Vice Chancellor)

‘The number of days usually given to carry out the monitoring and evaluation of a particular institution are not enough. This is usually put on the limited amount of money allocated to NCHE by the Government.’ (Participant PUV1, Male, Professor, Former Vice Chancellor)

Inadequate funding propagates other undesirable consequences including Council’s inability to interact with the wider public as observed by a peer:

‘No constant interface with the Council by the stakeholders like the students, staff, parents and the community. The Council is so detached from all the other stakeholders; this makes it hard to have its ears on the ground concerning the conduct of institutions.’ (Participant PU2, Female, PhD, University Lecturer)

Other undesirable consequences related to limited funding include limited and untimely feedback by NCHE to HEIs after M&E visits, and inability to monitor HEIs regularly and in detail in areas including learning and teaching, and human and financial resources.

Non-compliance of higher education institutions with monitoring and evaluation activities

The peers observed that some HEIs had a negative perception of the role of Council in their institutions, which interferes with the smooth running of the external M&E activities and threatens Council’s legal mandate. Non-compliance by HEIs included refusal to abide by the Act, statutory instruments and guidelines, unpreparedness for the M&E exercise, poor record keeping and gaming. A peer said:

‘Some institutions borrow materials and personnel who are staged to appear legit before the visitation team and it is usually not obvious to the visitation team that such materials or personnel do not actually belong to such institution.’ (Participant PN2, Female, Legal Consultant)

For instance, some peer M&E teams were shown new library books and computers during M&E visits to some HEIs; when they returned for follow-up visits, the books and computers were no longer there. It is rather difficult to ‘stage’ personnel because HEIs in Uganda share staff owing to low staffing levels. However, lack of cooperation and preparation by the HEIs being evaluated, as well as their inability to perceive the exercise as beneficial to them, waters down the external peer M&E exercise.

Political and legal interference with external monitoring and evaluation

Good political will at institutional and national level is healthy for M&E exercises whose ultimate decisions have implications for the future of a HEI and the national HE ecosystem. Some of the peers in this study indicated external political interference in Council decisions resulting from external M&E activities. For instance, courts of law have time and again issued orders stopping Council from closing non-compliant HEIs. Such political interference weakens the regulatory role of a semi-autonomous national body such as NCHE.

Peers’ suggestions on improvement of external monitoring and evaluation

A presentation and analysis of responses to this question shows that the peers are keen on having the current challenges in the external M&E process addressed by the responsible actors. The following suggestions were made:

  • Government should amend the law, the statutory instruments and the Quality Assurance Framework to provide for closure of non-compliant HEIs and to control for conflict of interest in the membership of NCHE.
  • Council should adequately plan before undertaking external M&E by:
    • benchmarking other regulatory agencies
    • designing and implementing strategies that enable the identification, recruitment, rewarding and retention of key M&E experts for better results
    • providing adequate notice to experts to partake in the M&E exercise
    • delivering literature to experts timeously to adequately prepare for the M&E exercises.
  • Council should address its professional and technical capacity gaps to be able to effectively and efficiently fulfil its mandate by increasing staffing levels; continuously building staff capacity; developing an automated M&E system for institutions to record required information; ensuring timely feedback to institutions; strictly enforcing the regulations; organising a mapping exercise for all institutions to identify the illegal ones; and auditing both public and private institutions. Further, the peers suggested that Council should continuously build a databank of reliable and competent experts; train staff, students and other interested stakeholders in basic M&E procedures to ensure sustainability; establish M&E regional desk offices; involve local education authorities; and form a joint East African Community of National Councils or Commissions of Higher Education to share best practices.
  • Government and Council should address current funding gaps by diversifying sources of funding and increasing government subvention to Council for M&E activities.

The above proposals imply the need to have adequate planning for external peer M&E by the Council. An M&E schedule drawn out before a financial year starts is likely to support this exercise, making it more proactive than reactive.

Discussion of findings

External peer M&E of HEIs has the advantage of enhancing institutional productivity, students’ learning outcomes and efficient utilisation of resources because it focuses on a broad range of activities and processes including governance, teaching and learning, staff and student welfare and support services, staff professional development, financial management and the role of the institution in the broader community. According to the UNESCO Report (2009b:8), the ultimate aim of external M&E is to deliver the best in terms of services and programmes and to remain accountable to students and other constituents.


However, fundamental to any M&E exercise is the capacity and willingness to objectively assess and evaluate programme and service delivery (UNESCO 2009b). This implies that the external M&E process ought to be systematic, right from its inception, planning and implementation to the utilisation of data. Both the NCHE and the HEIs being monitored and evaluated should be well positioned to benefit from the exercise.

Since the early 1990s, M&E has seen a steep climb within Africa in terms of practice, and as a profession and field of academic study. As a field of study, M&E is taught in several HEIs from first degree to PhD level. As a field of practice, specialised departments housing M&E practitioners now exist, and the demand for evaluation of policies, projects, programmes and interventions continues to increase (UNDP 2013). Despite these efforts, there is still a need to strengthen M&E capacity through continued institutional support and training in M&E designs, data management and utilisation. According to the UNESCO Report (2016):

the success of the development of M&E systems, not the least through the use of technology, depends on the ability of the system to utilize the tools by having well-trained personnel to handle it for M&E purposes. The capacity needs range from analysis and policy formulation, data management, upgrading the skills of staff doing work that demands higher levels of IT skills, and proficiency in handling a large mass of data, to providing needed training for statistical or database management. (p. 25)

The findings in this study indicate a capacity gap in M&E at the NCHE, which needs to be continuously addressed. In a report on sector M&E systems in the context of changing aid modalities in the case of Uganda’s education sector by Holvoet and Inberg (2012), the neglect of M&E capacity development is noted to be particularly surprising from the perspective of budget support development partners who are supposed to rely on country M&E systems for their own accountability purposes.

In addition, despite the increasingly centralised, standardised approach to external M&E indicated in this study, there is little analysis of the rationale behind the methods because there is little exploration of what ‘quality’ is in the context of higher education. There are various stakeholders in HE, and each of them has a different perception of what quality HE is. Harvey and Green (1993) note the diversity in the interpretations of quality, many of which are contextually determined. For instance, to students and guardians, quality means value for money, whereas to accreditors and professionals, it means high standards and excellence. Brennan and Shah (2000) contend that quality can have an academic, managerial, pedagogical or employment focus. These various interpretations have implications for how quality ought to be monitored and evaluated.

Despite good intentions, M&E has, because of its centralised structure at national and institutional levels, become over-bureaucratic and political, and is hence likely to hamper the potential for significant and sustainable changes in HEIs resulting from M&E undertakings. The over-bureaucratisation of external M&E in HEIs has led to a focus on accountability rather than improvement. Despite its onerous and somewhat oppressive burden, the focus on accountability is a safe process for higher education institutions because all they have to do is wait for the summative external M&E rather than nurture their own internal M&E mechanisms. By focusing on accountability, the transformative potential of M&E is watered down.

According to the European Commission Report (2003:3), the basis of any monitoring system should be to bring support to local data analysis for local decision-making and to monitor progress in delivering the goals of the national (in this case higher education) development framework. The NCHE shares the M&E guidelines and tools with HEIs, thereby decreasing transaction costs and facilitating policy dialogue and monitoring at institutional level. The guidelines and tools are fairly flexible for review through the correct fora. This is good practice.

With regard to the scope of the external M&E, findings in this study show that more focus is placed on input measures, that is, land, governance structures, staffing, education facilities, finances, student enrolment, and research and publications (NCHE 2014), and less on activities, outputs, outcomes and impact. However, elsewhere, because of the greater demand for societal accountability, it is no longer sufficient to focus exclusively on inputs. According to Altbach et al. (2009), evaluators are looking for new data and indicators that demonstrate that students have mastered specific objectives as a result of their education (learning outcomes), as well as interaction between students and faculty, career expectations, completion and success in finding a job. Few HEIs in Uganda (including NCHE) undertake systematic periodic graduate tracer studies and employer expectation surveys to monitor and evaluate the outcomes and impact of HE. The European Commission Report (2003:7) highlights a broad consensus among EU donors to focus monitoring on education outcomes and results in the following aspects: access to education, efficiency of the education system, and learning outcomes, complemented by the more transversal dimensions of equity and quality. Again, in the case of HE in Uganda, although it is easy to monitor and evaluate access to education, it still seems problematic to monitor and evaluate the efficiency of the education system and learning outcomes. The same report notes that efficiency of the education system is important but difficult to measure; yet, in the context of the move towards increasing emphasis on learning outcomes, it is important to conceive efficiency indicators principally as proxies for learning outcomes. The prime indicators for system efficiency in this case are graduation, completion, retention and dropout rates, information that is hardly available at NCHE and in HEIs in Uganda. This hampers decision-making, for instance matching HE training to national-level manpower planning and recruitment.

The NCHE has designed some M&E indicators to measure progress, including the student–lecturer ratio, student–lecture room ratio, student–computer ratio, student–book ratio and student–internet per hour ratio. However, a large set of quality indicators related to curriculum development, teacher commitment and the utilisation of pedagogic materials is yet to be developed to guide external peer M&E. Notably, the NCHE guidelines and tools principally focus on quantitative measures requiring ‘yes’ or ‘no’ responses with regard to land, governance structures, staffing, education facilities, finances, student enrolment, and research and publications, because quantitative measures lend themselves easily to accountability and evidence-based policy making at institutional and national level. From this study’s findings, HEIs yearn for a balance of both quantitative and qualitative indicators to guide decision-making.
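As an illustration of how such quantitative capacity indicators might be computed and checked against benchmarks, the sketch below derives a few of the ratios named above from self-reported institutional data. The field names, figures and thresholds are assumptions for illustration only, not NCHE's published standards.

    # Hypothetical sketch: computing capacity ratios from self-reported data and
    # flagging them against assumed benchmarks (not NCHE's published standards).

    institution = {
        "students": 4000,
        "full_time_lecturers": 160,
        "computers": 250,
        "library_books": 36000,
    }

    assumed_benchmarks = {
        "student_lecturer_ratio": 30,   # assume at most 30 students per lecturer
        "student_computer_ratio": 20,   # assume at most 20 students per computer
        "books_per_student": 8,         # assume at least 8 books per student
    }

    ratios = {
        "student_lecturer_ratio": institution["students"] / institution["full_time_lecturers"],
        "student_computer_ratio": institution["students"] / institution["computers"],
        "books_per_student": institution["library_books"] / institution["students"],
    }

    for name, value in ratios.items():
        benchmark = assumed_benchmarks[name]
        meets = value >= benchmark if name == "books_per_student" else value <= benchmark
        print(f"{name}: {value:.1f} (benchmark {benchmark}) -> {'meets' if meets else 'below standard'}")

Such a check only covers the quantitative side; the qualitative indicators that the peers call for (curriculum development, teacher commitment, use of pedagogic materials) would still require expert judgement during visits.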

Lessons learnt and recommendations

Quality assurance agencies responsible for institutional and programme M&E are under pressure from multiple constituencies to address ever more complicated expectations (Altbach et al. 2009). Uganda is no exception. However, in order to meet the expectations required of an external quality assurance agency in general and from external peer M&E activities in particular, the study highlights a number of key lessons learnt and recommendations.

According to UNESCO (2009b:iv), for any country to provide its citizens with an improved quality of life, it must include the funding of a higher education system that will help move citizens toward a better life through teaching, research, advanced employment and service. Therefore, the Uganda government and NCHE should mobilise funds to ensure that external M&E activities are carried out more effectively and efficiently. Availability of funds will ensure that the NCHE recruits adequate staff and builds the capacity of its internal staff and the pool of external M&E peers.

Council should benchmark against other HEIs to share best practices in external M&E activities, including how emerging technologies afford new external M&E opportunities. Council should enforce quality assurance of the external M&E process by strengthening its data management system, reporting and data utilisation, which will result in improved regulation of HEIs. This is because M&E is a central component of the process of management for results that improve performance. NCHE and individual HEIs must therefore ensure that there is an information dissemination plan, either in the M&E plan, the work plan or both. The information obtained should be user-friendly to generate more knowledge and support informed management decisions for continuous improvement of HE inputs, processes, outputs, outcomes and impact.

Council should continually update its M&E database, because the creation of a central database improves the demand for and use of data (Holvoet & Inberg 2012). With a good data management system, NCHE could critically reflect on the reports submitted by the peers engaged in the M&E exercise to inform and improve subsequent exercises and sustain a robust M&E system. Through external M&E exercises, HEIs could also be guided on how to improve current and future management of outputs, processes, outcomes and impact, having invested a lot of time and resources building such institutions.

Council should continually build the capacity of peer evaluators in the core external M&E focus areas, participate in the development, requirements and uses of M&E systems and tools, and share knowledge on M&E in the context of improved governance, accountability and effective development delivery and results. Capacity building in data production and quality should preferably focus on the full data chain, from collection of data to the elaboration of progress reports at NCHE level.

Higher education institutions should develop mechanisms of continuously engaging in self-monitoring and evaluation activities. A peer in this study made the following observation:

‘Self-assessment by institutions is essential and should not be overshadowed by the NCHE rules. Let universities hand in the requested annual reports and leave them continue their work. If the report is well made the NCHE will have no problems with it and their visits will be well received.’ (Participant PU15, Male, PhD, University Lecturer)

Practitioners and researchers have underscored the involvement of stakeholders in collaborative formative peer M&E activities for performance improvement at institutional, national and international level (Keig & Wagoner 1994; UNESCO 2016). The key stakeholders and strategic partners in external M&E include professionals, academics, education managers, government and line ministries, politicians, development partners, accrediting agencies, proprietors of HEIs, students, parents or guardians, civil society, non-governmental organisations and the local community where the HEIs are located. However, each of these has its own perceptions and needs, which have to be considered. When internal peer M&E exercises are well coordinated, for instance through the active involvement of faculty, the need for external ones will be reduced, thereby reducing costs, while at the same time enhancing faculty academic freedom. Internal self-monitoring and evaluation can foster representativeness, accuracy and typicality in what is evaluated, while at the same time accommodating the values of those engaged in the exercise and providing opportunities for professional growth and development.

According to the UNESCO report (2009b:54), there is a shift in the process for evaluating HE from government authorities (traditional) to peers in most of the world. Most quality assurance schemes begin with a self-study or self-review of the institution or programme being evaluated. The self-study obliges an institution to undertake a thorough examination of its own practices, resources and accomplishments with an eye toward measuring performance against its mission and identifying ways to improve. The internal M&E report is considered by a team of external evaluators who visit the institution and write a report of their own, assessing the validity of the self-study. The process usually includes an evaluation or inspection of the effectiveness of the internal quality systems. Although this procedure takes place in the monitoring and evaluation of HEIs, the internal reports are typically compliance-driven, focusing on NCHE requirements and less on institutional uniqueness.

An important trend in quality M&E processes elsewhere is that institutions are now more often monitored and evaluated against their own self-defined mission and less often against an institutional model defined by a regulatory agency (Altbach et al. 2009). This approach has become increasingly necessary and important with the growing diversity of institutions and delivery systems. The framework that regulators then use for judging the quality of an institution may reflect one or more of the following criteria by Harvey and Green (1993): quality as excellence, quality as fitness for purpose, quality as fitness of purpose and quality as enhancement or improvement. However, in Uganda, even with this perception in mind, the external regulatory role still supersedes the internal self-monitoring and evaluation mechanisms.

With regard to the scope of external M&E, studies elsewhere show a shift from a focus on input measures to a focus on education system efficiency and on learning outcomes (Altbach et al. 2009; European Commission 2003). However, external M&E by NCHE overly focuses on input measures referred to as ‘Checklist for quality and institution capacity indicators for universities’ (NCHE 2014; Schedule 4 of the Statutory Instrument No 8 of 2005). This type of M&E is compliance oriented, and according to the UNESCO Report (2016:11), it is bureaucratically aimed to ensure that the educational institutions comply with predetermined standards and norms set by rules and regulations. Given that it is no longer sufficient to focus on inputs alone, NCHE should engage stakeholders in reviewing the current guidelines and tools for external M&E to include measures of activities, outputs, outcomes and impact of HEIs.

Institutions of higher learning and stakeholders must be able to fully trust the external M&E system operated by NCHE. In addition to accountability, a good M&E system will serve a variety of purposes, including evidence-based policy making and results-based management (UNESCO 2016) at micro- and macro-levels. In evidence-based policy, evidence must be based on data that are comprehensive, timely, relevant and reliable. In results-based management, in contrast, all actors contributing directly or indirectly to achieving a set of results ensure that their processes, products and services contribute to the desired results (outputs, outcomes and higher-level goals or impact) and use information and evidence on actual results to inform decision-making on the design, resourcing and delivery of programmes and activities, as well as for accountability and reporting (UNDG RBM Handbook 2011, cited in UNESCO 2016:8).

Indeed, as recommended by Holvoet and Inberg (2012) and in the Ministry of Education and Sports M&E Framework (2002), in order to stimulate a culture in which information is used for decision-making, there is a need for more strategic engagement in developing a robust M&E system and institutionalising both the M&E function and a policy research and analysis function at Ministry of Education and Sports level. However, the same national strategic engagements should be cascaded down to NCHE level and to each HEI. In addition, the current Higher Education Law should be amended, and the current NCHE M&E framework and practices periodically reviewed to match contemporary trends and needs, including a gradual shift from compliance-based to performance-based M&E. Policy makers at the Ministry of Education and Sports and NCHE should create a good policy environment to nurture the growth and development of institutionalised or internal self-monitoring and evaluation mechanisms geared towards a culture of continuous self-improvement.

Acknowledgements

The author is grateful for the funding received from the African Evaluation Association to attend the 8th International AfrEA Conference in Kampala in 2017 where she presented this paper.

Competing interests

The author declares that she has no financial or personal relationships that may have inappropriately influenced her in writing this article.

References

Altbach, P.G., Reisberg, L. & Rumbley, L.E., 2009, Trends in global higher education: Tracking an academic revolution, A report prepared for the UNESCO 2009 World Conference on Higher Education, published with support from SIDA/SAREC.

Brennan, J. & Shah, T., 2000, Managing quality in higher education: An international perspective on institutional assessment and change, Open University Press, Philadelphia, PA.

European Commission, 2003, ‘Development policy and sectoral issues: Human and social development tools for monitoring progress in the education sector’, viewed 05 December 2017, from https://ec.europa.eu/europeaid/sites/devco/files/methodology-monitoring-progress-education-sector-toolkit-200302_en_2.pdf

Harvey, L. & Green, D., 1993, ‘Defining quality’, Assessment and Evaluation in Higher Education 18(1), 9–34. https://doi.org/10.1080/0260293930180102

Holvoet, N. & Inberg, L., 2012, Sector monitoring and evaluation systems in the context of changing aid modalities: The case of Uganda’s education sector, Institute of Development Policy and Management, University of Antwerp, viewed 25 January 2018, from http://ugandaevaluationassociation.org/wp-content/uploads/2011/03/Uganda-education-sector.pdf

Keig, L. & Wagoner, M.D., 1994, Collaborative peer review: The role of faculty in improving college teaching, ASHE-ERIC Higher Education Report No. 2, ASHE, Washington, DC.

Ministry of Education and Sports, 2002, Monitoring and evaluation framework for the education sector, Ministry of Education and Sports, Education Planning Department, Kampala.

Ministry of Finance, Planning and Economic Development, 2015, ‘Budget speech – Financial year 2015/16’, delivered at the meeting of the fifth session of the 9th Parliament of Uganda on Thursday, 11th June, Unpublished report, Kampala.

Rahi, S., 2017, ‘Research design and methods: A systematic review of research paradigms, sampling issues and instruments development’, International Journal of Economics & Management Sciences 6, 403. https://doi.org/10.4172/2162-6359.1000403

Republic of Uganda, 2001, Universities and Other Tertiary Institutions Act, Government of Uganda, Entebbe.

Republic of Uganda, 2008, Government White Paper on Report of the Visitation Committee to Public Universities in Uganda, Ministry of Education and Sports, Kampala.

Republic of Uganda, 2016, The report of the Visitation Committee on Makerere University, Bringing the future to the present, Published by the Uganda Printing and Publishing Corporation.

Sekaran, U., 2003, Research methods for business: A skill building approach, John Wiley & Sons, Inc. New York.

The Africa-America Institute, 2015, State of education in Africa report: A report card on the progress, opportunities and challenges confronting the African education sector, The Africa-America Institute, New York.

Uganda National Council for Higher Education, 2014, Quality assurance framework and the licensing process for higher education institutions, Published by NCHE, Kampala, Uganda.

Uganda National Council for Higher Education, 2016, The state of higher education and training 2013–2014, Published by NCHE, Kampala, Uganda.

UNDP, 2013, The rise of the south: Human progress in a diverse world, United Nations Development Programme Report, UNDP, New York.

UNESCO, 2009a, Manual for monitoring and evaluating education partnerships, N. Marriott & H. Goyder (eds.), International Institute for Educational Planning, UNESCO, France.

UNESCO, 2009b, Student affairs and services in higher education: Global foundations, issues and best practices, R.B. Ludeman, K.J. Osfield, E.I. Hidalgo, D. Oster & H. S. Wang (eds.), UNESCO World Conference on Higher Education, Paris.

UNESCO, 2016, Designing effective monitoring and evaluation of education systems for 2030: A global synthesis of policies and practices, UNESCO Education Sector, Division for Policies and Lifelong Learning Systems (ED/PLS), Section of Education Policy (ED/PLS/EDP), Paris.

The World Bank, 2004, A handbook for development practitioners: Ten steps to a results-based monitoring and evaluation system, J.Z. Kusek & R.C. Rist (eds.), The International Bank for Reconstruction and Development/The World Bank, Washington, DC.


 
