This research provides first-hand information about the field of development monitoring and evaluation (DME) in Egypt following the 2011 revolution. There is a great need for more effective, informative DME to hold government and development partners accountable for the results achieved and to meet people's needs and expectations. Both online and offline interviews were conducted with a purposive sample of 61 representatives of different stakeholder groups working in the field of DME in Egypt. Findings pointed to a lack of interest in and understanding of DME, difficulty in accessing the data required for satisfactory evaluation and the perceived limited effect of DME work on public policymaking. Respondents’ recommendations for enhanced performance included the presence of DME units in all government and NGO programmes, more intensive training for all parties concerned, the creation of an umbrella DME agency, allocation of a sufficient budget and advocacy for the cause.
Interest in this topic arose after the Arab Spring revolutions in several Arab countries and the increasing calls for holding governments accountable, with a special focus on the case of Egypt. Amongst the main reasons for the uprisings, which started in Tunisia and Egypt, were the lack of accountability demonstrated by the previous autocratic regimes towards their citizens, the inequitable distribution of wealth and the lack of social justice.
With the revolution in Egypt and the calls by the people for bread, freedom, social justice and human dignity, the field of monitoring and evaluation (M&E) was impacted. A new era is beginning in which there will be a greater need for more effective, informative M&E systems to hold government and development partners accountable for the results achieved and to meet people's needs and expectations. Citizens and the international community are becoming more determined to hold governments and international development assistance partners accountable. Governments are under more pressure to account for the outcomes and impact of public policies, to revisit how they formulate policies and to share results with the public. Citizens and younger generations did not feel the benefits of the economic reform enacted under the previous regime. Further, commitments made under previous regimes to eradicate poverty, enhance education and achieve social equity (especially with the introduction of the Millennium Development Goals) were not much felt by most Egyptians, especially the most vulnerable.
Today, the government is increasingly under pressure from the public and the international development community to deliver development results, to create jobs and to ensure effective management of public resources. Consequently, Egypt has already started to adopt systems that assess development assistance. A special unit within the Ministry of International Cooperation is responsible for assessing and gathering information on the volume of donor assistance given to Egypt and publishing information on the amount of aid that Egypt receives by sector. Whilst this type of report could be considered a good accountability tool vis-à-vis the public, it provides little information on how government delivers on its own policies. There is little information on existing M&E systems within the national machinery that assesses policy results and seeks to evaluate what works and does not work from a public policy perspective.
In this context, the term DME includes the traditional project-based systems of M&E adopted by international organisations to assess project effectiveness and delivery. It also includes the M&E systems of public policy, the systems by which evidence is used in the policymaking process, and analysis for better public accountability and resource management (May et al. 2006). Whilst the distinction between the reporting and M&E functions is not clear-cut in public institutions, the findings presented in this article highlight that public organisations are increasingly concerned about documenting their work progress and highlighting their achievements. In light of the political changes that Egypt has undergone, this recognition is in line with the demand for greater accountability vis-à-vis the public and the international community.
On the other hand, the existing literature on national evaluation capacity, which looks into strengthening national institutions to carry out M&E in the public sector, has been mainly centred on the interlink between international development assistance and the performance of aid delivered to the recipient country. This debate, which is still alive in aid circles, proposes entry points for mainstreaming M&E in national systems and is making progress in aid-dependent countries. However, in Egypt official development assistance represents almost 1% of gross national income, with contributions from multilateral agencies accounting for more than 80% of total disbursements in 2012 (MIC 2013). The pressure exerted by popular demand and pressure groups, including political and social groups emerging from the 25th of January revolution and subsequent uprisings in the Arab world, is the key force reviving the call for such accountability and performance systems to be established.
This article will provide a snapshot of the current M&E status in key public and private organisations in post-revolutionary Egypt. It will shed light on the existing systems, structures and institutional actors involved in the field of M&E in Egypt. The article is not intended to propose solutions as such; rather, it identifies in the results the glaring evidence of gaps in institutional, capacity and other issue areas, and highlights future trends for improvement.
Results contained in the article are based on a readiness assessment tool developed by the authors. The readiness assessment provides a review of the task and resource status immediately available prior to an intervention. It also assesses the risks and the probability of effectively introducing change. For the purposes of this article, a readiness questionnaire was developed to assess the position of existing actors with regard to M&E capacity, utility, functionality and access to information. The assumption is that institutionalising and streamlining DME systems in these countries will enhance accountability for the results achieved and will provide the capacity to undertake evidence-based evaluation and assessment of public policies.
This article seeks to recognise the unique historic juncture at which Arab countries stand, particularly after the uprisings in Tunisia, Egypt, Libya, Yemen, Syria and subsequently in other African countries. Evaluation theory and practice is hence expected to adapt to this changing environment and contribute to the new managerial and accountability structures that these governments should build. The purpose of the article is to assess the current situation of DME in Egypt, the challenges faced and potential pathways for future improvement. The article was concluded a few months prior to the 30 June 2013 revolution in Egypt (which followed the January 25 Revolution of 2011) and was revised throughout the first and second quarters of 2014. It is our expert opinion that the results hold up to date.
The article is divided into the following sections: the introduction; section one, which explains the research questions and methodology and contains the literature review; section two, which discusses the current situation of DME in Egypt according to the responses to a survey; section three, which responds to the question of where we want to go; and section four, which informs future debates about potential paths to institutionalise a DME system in Egypt.
Section I: Research problem, methodology and brief literature review
The main research question posed by the current article is how to get there: how to ensure a more effective, informative DME system in Egypt that holds government and development partners accountable to Egyptians for the results achieved.
The methodology used for the research involves a literature review plus the adoption of a strategic management approach in the analysis to identify, firstly, the current situation of DME, then the potential future pathways and, finally, how to get there (Figure 1). Both online and offline interviews were conducted with a purposive sample of 61 representatives of different stakeholder groups working in the field of DME in Egypt. The interviews continued until no further new insights emerged. For the analysis of the interview findings, descriptive statistics were used to present the quantitative findings of the closed questions; for the open-ended questions, the responses received were categorised and analysed accordingly to depict trends. The findings from the interviews are presented in the different sections of the article following the three main questions of the strategic management model adopted by the article: where we are now, where we want to go and how to get there.
FIGURE 1: The strategic management perspective adopted in the analysis of the field of DME in Egypt.
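To illustrate the kind of analysis described above, the following is a minimal sketch, not taken from the study itself, of how frequencies for a closed question and a simple keyword-based categorisation of open-ended responses might be tabulated. The response texts, themes and keywords are hypothetical illustrations only; in practice such automated tallying would complement, not replace, manual reading and coding of each response.

```python
# Minimal sketch of the analysis described above: descriptive statistics for
# a closed question and keyword-based categorisation of open-ended responses.
# All responses, themes and keywords below are hypothetical examples.
from collections import Counter

# Hypothetical answers to a closed question (e.g. demand- vs supply-driven DME)
closed_responses = ["demand", "demand", "supply", "demand", "supply"]

# Hypothetical open-ended answers to a question about challenges faced
open_responses = [
    "We cannot access government data for our evaluations",
    "Staff lack training in results-based management",
    "Reports are ignored by policymakers",
]

# Hypothetical coding frame: theme -> keywords used to assign a response
coding_frame = {
    "data access": ["data", "access"],
    "capacity": ["training", "skills"],
    "policy influence": ["policymakers", "policy"],
}

# Descriptive statistics (counts and percentages) for the closed question
counts = Counter(closed_responses)
total = sum(counts.values())
for answer, n in counts.items():
    print(f"{answer}: {n} ({100 * n / total:.1f}%)")

# Simple keyword matching to categorise open-ended responses into themes
theme_counts = Counter()
for response in open_responses:
    text = response.lower()
    for theme, keywords in coding_frame.items():
        if any(keyword in text for keyword in keywords):
            theme_counts[theme] += 1
print(theme_counts.most_common())
```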
Quick glimpse at the literature on development monitoring and evaluation
The aim of this section is to give a quick glimpse of the recent developments in the field of DME and the diverse institutional set-ups emerging from it. The overview of the existing research and practice in this area internationally is relevant to assess more specifically the situation in the Middle East and Egypt and how international practice can apply in the Egyptian context.
Purpose of using development monitoring and evaluation
There are many purposes and uses for DME systems, ranging from project, programme and policy M&E that better informs decision-makers and the general public, to internal applications within organisations that improve management effectiveness and efficiency and promote organisational learning by accumulating knowledge. Additionally, and of major concern to this research, DME systems can aid the decision-making process through evidence-based public policymaking, improve organisational learning and promote greater accountability and transparency within organisations and governments (Kusek & Rist 2005).
The spread of development monitoring and evaluation in the international development field
DME systems worldwide may have different goals and priorities in their outcomes, but one of the universal requirements is a good articulation between different levels, from the policy level right down to individual projects. Globally, there is a growing demand for DME from internal and external stakeholders, donors and recipients. In the wake of an economic crisis in much of the developed world, donors are cognisant of the need for their resources to be spent effectively and, in many cases, are liable to withdraw development funding if projects are not seen to realise their intended goals. Taxpayers from donor countries or from beneficiary countries, the ultimate backers of government-sponsored projects, are also growing more demanding in their pursuit of transparency and accountability of development projects financed with public money. Similarly, recipients of aid are also keen to get an overview of the scope and outcomes of development projects through systematic and objective analysis (Ndikumana 2012).
The first examples of DME were within US-based institutions concerned with development, such as the United Nations and the United States Agency for International Development (USAID), many of which remain at the centre of international development initiatives today (Segone 1998). In the 1950s, the approach to DME was focused on performing evaluations of development projects and attaching a monetary value to outcomes and outputs. In the course of this, measurement and comparison against other projects was the central concern of evaluations. Towards the 1970s, logical framework analysis was developed; it emphasised the importance of clear criteria for assessing outputs and was used in project planning, implementation and M&E. The focus traditionally was on answering the basic question of whether the project or programme was implemented according to set criteria (Kusek & Rist 2005; Segone 1998).
In the 1980s, DME practices became commonplace in Europe and evaluation of development projects was institutionalised by a number of international development agencies. Furthermore, the focus of DME shifted towards providing accountability for and ensuring transparency in project implementation. This was in response to public and government demands to gain a detailed overview of exactly how public funds were being utilised in the context of government-funded development programmes. This expansion in the use of DME tools coincided with the serious problems in measuring the empirical success of the structural adjustment programmes implemented by the International Monetary Fund (IMF) and the World Bank in the least-developed countries following a series of global economic disasters during the late 1970s (the oil crisis, debt crisis and global economic stagflation). These policies were pursued in part to promote the economic tenets of neo-liberalism, as spelled out in the Washington Consensus doctrine. Over time, it has become extremely difficult to estimate the success of these programmes. A study in the journal World Development shows that the programmes ‘often do not work’, citing ‘high rates of recidivism, low rates of completion, and an insignificant catalytic effect on other capital flows’ (Bird 2001). Development practitioners and international development institutions have come under pressure to provide evidence that programmes do work and have experienced increased demand to account for the counterfactual, that is, what would have happened had the fund or the programme not intervened.
Change in focus of development monitoring and evaluation
The most recent generation of DME theory and practice moved towards understanding and learning through evaluation and using DME reports as a key tool in decision-making. DME became less concerned with appraisal of projects and more with measuring the long-run outcomes and potential impact of development programmes (Segone 1998).
Equally, positive accountability as an objective is increasingly seen as desirable in the context of a development project. Development agencies have begun to internalise evaluation processes in such a way that evaluation has become a concern for each individual involved in a development project, rather than being centralised in one evaluation unit of an agency. Further, the series of shocks experienced at the global level since 2005, including the avian influenza epidemic since 2006, the food, fuel and financial crisis of 2007–2009 and the challenging macroeconomic environment accelerated by the increase in armed conflict in many developing and poor countries, has led key international development agencies and some donor actors to revisit their resource mobilisation strategies and to use evaluation to focus on learning emerging from development practices to better inform decision-making. Consequently, there has been a gradual shift to gear evaluation practice to capitalise on learning what works and does not work in a development context and learning that supports the design of development strategy.
Critical success factors for development monitoring and evaluation systems
There is a wealth of existing research outlining the critical success factors for development projects. Successful M&E is a feature of the long-run success of development programmes. DME itself is also subject to critical success factors, particularly related to its perception amongst stakeholders. In order to be successful, DME must be perceived as a positive contribution to development projects, particularly in respect of government stakeholders. In some instances, DME reports that provide critical feedback that reflects negatively on public stakeholders are ultimately seen as attacks on government. To that end, governments in recipient countries in particular are often wary of DME systems because of the potential political implications of negative reporting. Consequently, a mutual appreciation of the benefits of DME amongst internal and external stakeholders is essential in the pursuit of a relevant and integrated M&E system in an institutional context (Kusek & Rist 2005).
Another key factor that influences evaluation is the link between evaluation and politics. Although this link is not as clearly spelled out and extensively researched as evaluation methods and approaches are, the political set-up has a direct effect on the practice of evaluation. Vestman and Conner (2008) distinguish three positions on how evaluation and politics are related. The first position is the value-neutral evaluator: politics and evaluation can be and are kept apart; the evaluator is a conscious actor whose role is to gather and analyse the information, whilst the politician or the information user has control over the usage and utility of this information. The second position is the value-sensitive evaluator: evaluation is always situated in a political context and within institutional and political constraints that contribute to setting the agenda. In this view, conducting evaluation ‘is perceived as a technical expertise to measure quality and performances through prefabricated schemas and formula’ (Vestman & Conner 2008:57). The current state of DME in the international development community fits more within this position, where the planning and performance of organisations are understood through a series of indicator-based management tools. This includes results-based management and managing-for-development-results approaches, in addition to other sets of tools that emerge from conceptual frameworks placing more emphasis on accountability and financial feasibility. The authors argue that in this set-up ‘citizens are transformed to consumers that make choices in a market of health care, education, social welfare etc. Evaluation is seen as a practice that can guide consumer's choices’ (Vestman & Conner 2008:59). The third and final position is the value-critical evaluator: politics is integrated in the evaluation practice and constitutes the theoretical framework through which human knowledge and actions are interpreted. In other words, evaluation does not only generate and disseminate results; it also provides a deeper and better understanding of the evaluation object (Vestman & Conner 2008:63). This takes into account a number of key variables such as context, organisation, actors, culture and structures. In light of the three positions presented by Vestman and Conner, the political context is a key determinant in setting the purpose and utility of evaluation, and evaluators are expected to maximise the benefits of the relationship between evaluation and politics and minimise its risks.
Another key factor for the success of DME, in terms of both supply and demand, is a concerned citizenry. Understanding of DME and its usefulness as a tool in achieving transparency and accountability has also been cited as a key success factor. As alluded to above, the institutionalisation of DME is viewed as a political risk by many governments and, as such, demand on the part of the electorate can go a long way towards encouraging the implementation of DME practices. Indeed, what this amounts to is the involvement of civil society in partnership with government in the pursuit of successful, transparent and accountable development programmes. When a willingness to implement DME is not present amongst those two stakeholders, the effectiveness of evaluations is severely undermined (Burdescu et al. 2005).
The ideal outcome is an institutionalised DME system that seeks to ensure that decision-makers, citizens, civil society organisations, development partners and other stakeholders are well informed and empowered within well-balanced relationships and with an ultimate positive impact on development.
Section II: Where we are now
The main development monitoring and evaluation stakeholder groups studied in Egypt
Ten main stakeholder groups were identified as having an influence and interest in the field of DME in Egypt. We tried to identify the main role played by each of those different stakeholder groups. The objective of the survey was to solicit primary information from the DME field about the current status of the practice, the perceived vision for more effective DME and the prerequisites for achieving that vision. The survey instrument was structured based on those three main pillars. Questions were asked about the level of perceived interest in DME, the practitioners’ experience, education and skills, the type of work performed, the audience for the DME work, the means of dissemination and the usage of the DME results, the challenges encountered, whether the respondents perceive that their work has an impact on public policymaking in Egypt, how the field was affected by the January 25 Revolution and then, most importantly, the vision for more effective DME work and recommendations about how to realise that vision.
Development partners (donors)
Development partners, more commonly known in Egypt as donor agencies, play a major role in the field of DME. Many of the well-known tools and techniques of DME, such as the log-frame, were originally developed by donor agencies. Different bilateral and multilateral agencies use different templates and different jargon in their DME tools, mainly to improve aid effectiveness, but they all emphasise its importance and work on disseminating it as the predominant culture amongst their beneficiaries. We mention here a sample of some of the well-known donor-sponsored DME activities and capacity-building programmes in Egypt:
- UNICEF courses empower the Egyptian labour force; for instance, the ‘Meshwary’ project provided its Egyptian participants with entrepreneurial skills, career guidance and employability skills. Another course is the ‘Development’ course, which provided participants with valuable knowledge about social behaviour and communication.
- The online ‘My M&E’ platform, a knowledge-sharing website that enables people from all over the world to gain knowledge and information. It includes various interactive multilingual e-learning programmes and has participants from Egypt as well as from other parts of the world.
- The United Nations Development Programme (UNDP) has been a development supporter for Egypt since 1953. Its goal is to assist the Egyptian government in its strategic plans regarding sustainable development, crisis prevention, environmental issues, democracy plans and reducing poverty. UNDP supports a number of key Egyptian ministries to build capacity by integrating results-based management approaches into their relevant business-as-usual practices.
- The World Bank, through its CLEAR initiative, organised a number of training sessions on M&E in Egypt in cooperation with the Arab Administrative Development Organization (ARADO), an organisation affiliated to the Arab League and mandated with the mission of developing the management capacity of Arab government, business and non-governmental organisations.
Universities and scholars
Many universities in Egypt have programmes that deal with DME in different ways. We mention here some of these programmes offered through the American University in Cairo:
- Through the Social Research Center, a three-month intensive training course is offered covering different qualitative and quantitative research methods. Several modules within the three-month course deal with M&E tools and techniques.
- Through the Public Policy and Administration Department at the School of Global Affairs and Public Policy (GAPP), several courses at the master's level tackle within their syllabi DME-relevant issues and topics. Sample relevant courses that include discussions of DME issues include:
  - Public Policy Analysis and Evaluation.
  - Strategic Management for Public Policy and Evaluation.
  - Nonprofit Management.
  - Management of Development Organisations.
- Through the Political Science department at the School of Humanities and Social Sciences, there is a master's degree offered in International Development, which also tackles DME issues.
- GAPP Executive Education offers training courses to government and nonprofit managers on performance management.
Political parties and movements
After the 25 January Revolution in Egypt many new political parties and political movements were formed, many of them led by young revolutionaries. These parties are still working on developing their organisational structures and governance systems, but the majority, if not all, are keen on finding ways and means of holding government accountable and analysing and assessing public policies. The degree to which this emerging movement will have a strong stake and interest in DME depends on the degree of activism and dynamism civil society organisations and political parties have in the post-revolutionary phase in which Egypt is currently living. The past period has been characterised by drastic political change and demonstrations.
National development monitoring and evaluation associations and networks
In 2005, the Egyptian Development Evaluation Network (EgyDeval) was brought into being after the International Development Evaluation Association's (IDEAS) first biennial conference in India. It started with five members and now boasts a membership of around 20 professional evaluators from the United Nations, international NGOs, local NGOs and government agencies. It aims, as stipulated in its founding documents, to enhance the efficiency and effectiveness of development evaluation in Egypt. This is expected to take place through developing evaluation capacity in Egypt through six working areas: evaluation practice support, professional and technical support, information dissemination, capacity-building, networking and experience sharing. The network is composed of a diverse group of people, mainly academics and development practitioners. The group wanted the field of evaluation to be nationally recognised as a profession and to set the basis for national guidelines to guide Egyptian evaluators. By doing so, they initiated a process of making M&E training and knowledge material available in Arabic for Arabic-speaking practitioners and contributed to the translation of a number of existing tools in M&E, a wide range of which are now available to the Arabic-speaking public, through its partnerships with the Middle East and North Africa (MENA) evaluation network EvalMENA.
EgyDeval members have a presence in major DME conferences and associations such as the African Evaluation Association (AfrEA), IDEAS, EvalMENA, the United Kingdom Evaluation Society, the East Africa Research and Evaluation Network and the International Organisation for Cooperation in Evaluation (IOCE). EgyDeval has run a development evaluators discussion group since 2007 and conducts quarterly experience-sharing meetings, which host both international and national consultants. EgyDeval is currently undergoing registration procedures, seeking partnerships and funding to implement a strategic plan developed by its members to fill the gap in needs for development evaluation in Egypt. One of the challenges faced by this group is that it is difficult to get established and formally institutionalised.
A later DME project in Egypt, the Egyptian Research and Evaluation Network (EREN), was initiated in 2008 through a UNICEF conference on research and evaluation in Egypt. It aims at creating a platform for research and evaluation capacity development, knowledge generation and dissemination, dialogue stimulation, experience sharing and advocacy. EREN embarks on realising its aims through conducting conferences, offering capacity development opportunities, Arabising evaluation texts and building partnerships with a wide range of stakeholders. Currently, EREN has around 200 members in the various fields of research and evaluation. The idea started in 2008 amongst 13 national experts, university professors and development practitioners. By 2012 there were 151 members and attempts were being made towards formal registration.
The network has been active in initiating a capacity-building initiative offered in Arabic; this will include a certificate or diploma on M&E with national universities, as well as a mini-International Program for Development Evaluation Training (IPDET) course offered in Arabic. Recently, the network joined the EvalPartners initiatives and has received a peer-to-peer award to undertake advocacy activities in the region and with African voluntary organisations of professional evaluation (VOPEs).
The Affiliated Network for Social Accountability (ANSA), which is registered under CARE, is an initiative that started work in the Arab world in October 2010.
Social accountability refers to the ways and means through which citizens, civil society organisations and other non-state actors can hold public institutions, programmes and services accountable for their performance, using an array of mechanisms (Care 2014).
Coordination between EgyDeval and EREN has occurred mainly through the overlapping membership base of the two networks. Whilst EgyDeval's main work has focused on advancing the M&E practice in Egypt, making research material and tools available in Arabic and identifying competency guidelines, EREN has focused on thematic issues pertaining to evaluation findings and the work of UNICEF in Egypt and on engaging more actively with national entities. Both networks coordinate to streamline their respective areas of work.
International, regional and continental civil society organisations
Egyptian development practitioners and evaluators have been involved with regional and international VOPEs through different forms of engagement. Organisations and networks such as IDEAS, IOCE, EvalMENA and AfrEA are considered important platforms for interaction and networking, learning and professional development. Most of these organisations and associations have branches or chapters operating in Egypt and they have an impact on the field of DME capacity-building through their membership base. The year 2009 was a landmark year for evaluation in Egypt. It marked AfrEA's fifth international conference, which was held in Cairo. The conference was organised in partnership with the government of Egypt, through the Information and Decision Support Center, the prime minister's think tank, and UNICEF's office in Egypt. AfrEA is a pan-African association established in 1999 and has over 25 registered associations involved in the field of M&E of development in Africa. With its headquarters in Ghana, AfrEA works towards strengthening a culture of accountability and evaluation in public and community service by supporting the development and growth of national evaluation associations and fostering an environment of thought leadership in evaluation. The AfrEA conference presented an opportunity for several Egyptian development practitioners, civil servants, academics, students and evaluators to attend the conference and the professional learning workshop week organised before it. The conference also presented a fruitful platform for Egyptians to learn about continental issues on evaluation, particularly in Africa, to benefit from the different services provided by the association and to have official representation within its governing body and membership (AfrEA 2014). EvalMENA was born the same year, following the AfrEA conference. Arab and Egyptian participants joined forces to form the first MENA network, composed of members from nearly 13 countries in North Africa and the Middle East, with its liaison office based at the American University of Beirut, Lebanon. EvalMENA is active in mobilising members and increasing its membership base, making evaluation knowledge and training materials available in Arabic and establishing a regional governing body for the network. It came out of a series of research grants by the International Development Research Centre. The current vision of EvalMENA is to see development actions (projects, programmes, research and development activities, etc.) performing better and to utilise evaluation findings to enhance their performance. EvalMENA is playing an important role in producing and disseminating Arabised publications. The mission of the project is to see a critical mass of qualified and internationally acknowledged evaluators coming out of the MENA region.
Further, IDEAS has been active in Egypt through its membership base, which is mainly composed of alumni of the IPDET held at Carleton University every year. Its members interact and exchange information through a mailing list and IDEAS conferences. Since 2010, the IOCE has become known to Egyptian development practitioners and evaluators, and this awareness accelerated with the launch of the EvalPartners initiatives. IOCE was established in 2001 as a global forum for formal and informal evaluation networks worldwide and has been the leading partner for the EvalPartners initiative.
Through IOCE and EvalPartners, Egyptian VOPEs are able to gain more exposure to the wider evaluators’ community and interact at a regional and global level. For example, the International Forum on Civil Society's Evaluation Capacities, organised by EvalPartners and IOCE and held in Chiang Mai, Thailand, in 2012, presented an opportunity for Egyptian participants, along with participants from all over the world, to confirm their commitment to enhancing M&E theory and practice, declaring the year 2015 the International Year of Evaluation. These types of events and platforms have contributed towards the building of a body of Egyptian professionals and provide healthy organisational support for national practitioners to become part of an international community committed to stronger evaluation practice, capable of advancing specific evaluation issues at the national level.
When discussing DME in Egypt, one of the first institutional development monitoring initiatives was one started in 1999 by Dr Medhat Hassanein, the former Minister of Finance. He set out to implement a capacity-building M&E programme in the Egyptian public sector, starting with the finance sector. The programme received widespread backing from other Egyptian cabinet ministers and also from multilateral development partners such as the World Bank, IMF and UNDP. Following the development of a policy paper in 2000, the ambitious portfolio of public finance reform began implementation; the outcome was the preparation of the nation's first performance-based budget, together with plans for a revolving budget for the first time, within a span of one and a half years, the ultimate lifetime of the project. Those developments were ready for soft approval by Parliament in its 2004–2005 session, to be followed by a legislative amendment to the budget law. However, in July 2004 a major cabinet shuffle removed the ministers who were key sponsors of the original DME initiative, resulting in its ultimate cancellation.
Other than this initial work, there are elements of DME work practiced in several other ministries and government organisations within their planning and monitoring units, such as in the Ministry of Finance, the Ministry of Electricity, the Ministry of Industry and Technological Development and the Ministry of Local Development. A more specialised DME unit is present within the Ministry of International Cooperation under the name of the Project Evaluation and Macro Economic Analysis unit (PEMA). The main drawback about PEMA's work is that the reports produced are confidential and are not available to the public.
Research centres and think tanks
Much of the work of research centres and think tanks, whether independent or supported by government, focuses on policy M&E. An example of a government-affiliated think tank is the Information and Decision Support Center (IDSC), affiliated to the cabinet of ministers and considered the main decision support unit for the cabinet. IDSC produces many different types of publications including books, reports, working papers and poll results (IDSC 2014). Although IDSC produces robust reports and policy evaluations, its affiliation with government detracts from its independence and perceived objectivity. Other non-government affiliated think tanks include the Egyptian Center for Economic Studies (ECES), Partners for Development (PID) and many university-based research centres, such as the Public Administration Research and Consultation Center (PARC) and the Center for Economic and Financial Research and Studies (CEFRS) located within Cairo University.
Parliament and legislative bodies
Until 2012, Egypt had two houses of parliament: a lower house, the People's Assembly, and an upper house, the Shura Council. In order for both houses to be able to hold the government accountable, they must be involved in M&E of policies and performance. However, according to the Egyptian constitution of 2013, the second house of parliament is suspended and Egypt will make do with one main house of parliament.
Since the revolution, the media in Egypt has had a stronger presence and become more influential as a watchdog. People felt empowered after the revolution, and both the self-imposed and the government-imposed censorship on all media channels were alleviated to a great extent. To date the Freedom of Information draft law, prepared by the government in coordination with Egyptian rights groups, has not come into force. It is expected to be presented to Parliament once its members are elected. The law would allow journalists and citizens to access information issued by the state, such as governmental reports and statistics that are normally not available to them, and would require greater transparency on the part of the state.
DME in Egypt, as in other parts of the world, is part of the development business and there are many specialised consultants and M&E professionals who make a living out of providing their services to organisations in need of performing DME work.
Empirical study findings of where we are now
This section presents a quick snapshot of the DME field in Egypt as revealed by the empirical study and the 61 interviews conducted by the authors with the various groups of stakeholders.
As is evident in Figure 2, interviewees represented the 10 different stakeholder groups identified earlier. The most prominent stakeholder groups in the sample are independent consultants, academics and international development partners. The respondents represent the different categories of actors working in the field of DME in Egypt. As noted, in many cases a respondent may fit into more than one of the stakeholder groups identified; however, respondents were asked to select the most prominent category that applies to them. An independent consultant may be a university professor and may occasionally work with government or with a research centre. For that reason it is difficult to ascertain to what extent the classification in the sample matches the actual distribution in the field. The researchers tried to have representation from all of the different groups of stakeholders and continued with the online and face-to-face interviews until no new insights were being derived.
FIGURE 2: Distribution of respondents based on work affiliation as related to DME.
According to the study findings, almost 90% of those surveyed suggest that there is a lack of interest in and understanding of DME in Egypt and of its importance in the context of a development project and policymaking.
Two-thirds of respondents view DME as a demand-driven activity within their organisations (Figure 3). The demand for DME by development project stakeholders indicates that there is an appreciation of the importance of DME in the context of such a project. Organisations are taking action to allocate resources to DME, both for reporting the impact and results of a programme and for generating suggestions to improve the design and implementation of a programme. This perceived importance could be explained by the increased commitment of resource partners to allocate budget for M&E. In a number of cases, respondents cited DME reporting as being commissioned in response to donor demands.
FIGURE 3: Respondents’ perceptions of whether current DME systems and activities are demand driven or supply driven.
Respondents were asked specifically at what point in the project or programme cycle DME should take place. The majority suggest that DME should be conducted at each point in the project cycle (Figure 4), but an even larger share see DME as a necessity upon completion of a project. Respondents discussed both M&E activities simultaneously and stressed the importance of end-of-term evaluation.
FIGURE 4: Level or stage respondent or organisation calls for DME.
One aim of the survey was to establish what types of policy and programme documents are usually evaluated. Indeed, an evaluation carried out at sector level rather than at project level will provide much broader policy-specific and sector-specific insights and also recommendations that can be applied more broadly. On the other hand, the results of an evaluation carried out at project level are project specific, providing recommendations that are more detailed in respect of particular projects, but which may not be applicable to any other project. As such, project-level evaluation is only useful if a similar project is replicated.
Based on responses, the majority of organisations perform evaluation at the project level and, to a lesser extent, at programme, policy and sector levels (Figure 5). This shows that organisations, for whatever reason, are more concerned with evaluating projects individually rather than on a broader level. This could be explained by the interest of organisations in internal and external accountability and transparency vis-à-vis resource partners. There could be scope for improving the efficiency and utility of DME systems here, by engaging in more broad-level evaluations, the results of which would filter down to individual projects.
FIGURE 5: Type of DME in which respondent or organisation is involved.
To add to this, Figure 6 shows that most DME activity is focused on evaluating project outcomes and less focused on evaluating the longer-term and broader-scope impacts. In analysing the responses to this question there are several caveats to take into consideration. Firstly, respondents may be choosing the most ‘politically correct’ response by stating that they do outcome evaluation more than output or input evaluation and, secondly, although the terms were clarified in the interview instrument, respondents may still have confused the outcome and impact terms because they are used differently by different organisations.
Most of those surveyed have significant – more than five years – experience in DME (Figure 7). As with respondent affiliation, inferences cannot be made on how this compares to the actual distribution of DME workers’ years of experience. In any case, the distribution of respondent experience gives certain validity to the empirical results as they are largely derived from individuals experienced in the area of DME. However, where the distribution in Figure 7 does not correspond to the population distribution, it is possible that these results fail to capture proportionally the views of less-experienced DME practitioners. This is particularly relevant in relation to perceived views on DME, as this is likely to be varied based on time spent in the industry.
FIGURE 7: Respondents’ experience with involvement with DME.
The survey has shown that the DME activities most performed by the surveyed respondents are midterm and final evaluations and desk research, which might not be as effective as other activities such as field surveys and qualitative research. The least-used activities are sector evaluation, participatory evaluation, gap analysis and group surveys (Figure 8).
FIGURE 8: Type of DME activities performed by respondents or their organisations.
Respondents are split on the ease of access to and availability of data for DME. Slightly more than half feel that the data required to conduct a satisfactory evaluation of development projects is not easily accessible or available. This is not particularly conclusive, as an almost equal proportion of respondents feel that the necessary data is available. The disparity in responses could be due to differing resources across organisations, some of which may have greater access to data due to economies of scale or similar strategic advantages. Furthermore, the survey question provided no detail on the quality or accuracy of data where it is available, nor on the reasons impeding access to data. In order to conduct a quality evaluation of a development project, a programme or a policy, accurate data of uniform quality and consistent criteria are essential.
The manner in which DME is implemented within the organisations to which the respondents are affiliated may also be significant in analysing its effectiveness. Half of the respondents surveyed implement evaluation through a work team, that is, a team of individuals composed on an ad hoc basis to work on a specific assignment and deliverables. Units, departments and sectors, by contrast, are organisational entities with well-established staff members assigned to do DME work. It would seem, then, that organisations are inclined to deploy a team of specialised individuals to conduct DME activities across the organisation (Figure 9).
FIGURE 9: Type of work structure or organisation respondents have for DME.
Within organisations undertaking some level of DME, the majority have fewer than five staff members involved in DME work (Figure 10). The implication of this is that DME makes up a relatively small component of the activities of the organisations surveyed, depending on the overall size of the organisation and its scope of operations.
FIGURE 10: Approximate number of staff members involved in DME.
The average qualifications of DME-related workers are summarised in Figure 11. Based on the sample of 61 respondents, most workers have at least an undergraduate degree, with almost half possessing an advanced university degree. This, together with the years of experience alluded to above, would indicate that the DME workers surveyed are well educated and trained in the area of DME, lending some extra credibility to their responses. The caveat here is that other less-educated practitioners working in DME may not have been reached by our purposive survey.
FIGURE 11: Main qualifications of staff involved with DME activities.
Respondents were given a drop-down list of DME methods and tools to choose from, with the option of ticking all that applied. Results show that the most-used methods in DME activities are surveys and interviews. Results-based management (RBM) methods, outcome mapping and participatory approaches ranked low, which might indicate that evaluators and monitors still rely on traditional approaches and tools in conducting DME assignments (Figure 12). Whilst surveys and interviews could yield quantitative and qualitative information on programme performance if they are designed accordingly, the low use of alternative tools indicates that there is little concern or urgency about applying statistically significant evidence or exploring alternative tools to assess programme delivery.
FIGURE 12: Main methods and tools used in DME activities.
Types of DME reports produced by the respondents are mainly end-of-programme or end-of-project reports, which are produced after the project ends (Figure 13). Midterm and impact evaluations were also ranked high by respondents, whilst process reporting such as monthly reports, outcome reports and periodic reports ranked low. The focus on conducting end-of-project reporting as well as impact evaluations could be explained by the fact that the evaluation of development projects in Egypt is only happening at the end of the project, at the request of a donor who wants to know that their project delivered. The absence of baselines is another feature of impact assessment; Figure 13 reveals that, despite the increased focus on end-of-project assessment, evaluation tools are not utilised throughout the programme cycle to enhance design and programme activity.
The survey also seeks to establish the basis on which DME reports are prepared and disseminated and in what context such reports are considered relevant. There was a mixed response on the frequency of DME report publication, the most common frequencies being annual and quarterly, with annual reports cited by 55.7% of respondents, followed by quarterly reports cited by 48.9% (Figure 14).
Additionally, many respondents reported that DME reports are prepared in a more ad hoc fashion, in response to project demands, mainly from donors. There also appears to be a difference in frequency depending on whether the report is prepared for internal or external use. Some respondents cited monthly project reporting internally within projects, whilst external reports are prepared less frequently.
The target audience of DME reports is an equal mix of internal and external stakeholders, with the latter referring primarily to donor organisations. A number of respondents noted that the distribution of donor funding is directly related to DME reporting in the form of a results-based evaluation of past projects.
The training underlying DME is an essential factor in its usefulness. Whilst it was established that DME practitioners are largely quite well educated, the survey also looked at what kind of specialised training is available. The main areas of training are in M&E, together with on-the-job training and RBM training (Figure 15).
FIGURE 15: Type of DME-relevant training received by respondent or other staff in organisation.
Just over two-thirds of the organisations surveyed that are involved with DME-related activities provide training internally and to other organisations. Much of the training conducted is done through on-the-job learning. This information correlates with the finding about the variety of tools and methods the respondents use to conduct DME work. The quality of evaluation is affected by the limited capacity-building provided to staff. Also, most of the people involved in DME assignments are doing many other tasks aside from M&E. This is why we find on-the-job training ranking high, since staff involved in other programmes may be requested to perform M&E tasks.
According to a quarter of the responses received, DME reporting is used within organisations to make improvements and changes to the structure of projects (Figure 16). Respondents gave general answers as to how they used the DME reports in policy support, in communicating findings and in introducing improvements in general. More concrete uses, such as allocating budgets, deriving lessons learnt and attracting new donor funding, were less frequently mentioned.
FIGURE 16: How respondents and their organisations make use of DME reports.
When respondents were asked about how DME reports are disseminated, the most frequently mentioned responses were websites and online media, followed by dissemination through workshops to both internal and external stakeholder groups, general reports and dissemination to donor agencies (Figure 17). The less-frequently mentioned dissemination methods were through media, dissemination to government, workshops and newsletters.
FIGURE 17: How respondents and their organisations disseminate DME reports produced.
The final portion of the survey was concerned with measuring the viability of DME reporting in Egypt and identifying the main challenges faced by DME practitioners. In general, there is a negative sentiment with regard to the effectiveness of DME in Egypt. Almost two-thirds of respondents are of the opinion that DME work has no effect on public policymaking in Egypt. Generally it is felt that DME reports can often be seen as ‘anti-government’ due to their potentially critical content and, as such, they do not receive the necessary attention to influence policymaking. More generally, it is not felt that the DME reporting is appreciated as a tool of planning or budgeting. That being said, within organisations, DME is seen as a useful tool when it comes to designing and streamlining projects and also in providing insights and strategic recommendations for future projects. However, this does not often translate into a tangible benefit for projects when it comes to decision-making at managerial and government levels.
The survey questions identified a number of challenges facing practitioners carrying out their DME work. The primary issue, facing 43% of organisations, is a lack of data access and inaccuracy in the available data related to DME (Figure 18). Similarly, even when the appropriate data is available, some organisations lack the skills and expertise to analyse the data appropriately. Both of these factors can undermine the effectiveness and credibility of DME reports by leading to conclusions that are not based on reliable data or analysis.
FIGURE 18: Main challenges and obstacles facing DME work undertaken by respondents and their organisations.
Another significant challenge is a perceived lack of an enabling environment for the development of DME theory and practice in Egypt. Respondents also identified a number of other challenges, such as national political instability, lack of political support, insufficient training and the high costs associated with DME.
As discussed above, even though the value of DME is appreciated by many of the respondents within the development sector, amongst government and policymaking circles there is little incentive to engage in evaluative actions, particularly at the public policy level. Evaluation in the public affairs sphere could be perceived as high risk. In light of the political unrest the country has witnessed over the past three years, it is difficult to assess whether evaluation will feature high on the public policy agenda in the near future. Building an enabling environment for evaluation to be integrated into the public policy sphere requires (1) the engagement of an active evaluators’ community, through national evaluation associations that are able to build linkages with the key public institutions, (2) free access to information and to key public data and statistics that feed evaluative exercises and (3) increased interest from policymakers who have a vested interest in using evaluation findings in their respective campaigns and policy debates.
Section III: Where we want to go
Empirical study findings of where we want to go
Respondents were asked whether they perceived the January 25 Revolution to have had an effect on DME practice in Egypt, and responses were mixed, with 56% stating that it had no influence and 44% stating that it did have an influence.
Respondents had different opinions about the revolution and how it had influenced the realisation of development objectives in Egypt (Figure 19). On the negative side, the revolution has resulted in a greater degree of economic and political instability, which not only makes DME reporting more difficult to conduct because of reduced funding and transparency, but also undermines its significance in the context of a country in crisis. On the other hand, the increased role of the youth after the revolution is seen as a positive development for DME. There is a greater will amongst the younger generation to hold development projects in Egypt to account and pay more attention to the details of how development organisations are going about realising their objectives for Egyptians. Some respondents also view the sweeping constitutional reforms that have followed the revolution as an opportunity for DME in Egypt, as they hope that some provision can be made for mandatory evaluations at the policy, programme and project levels going forward.
FIGURE 19: Examples of respondents’ comments on how the January 25 Revolution influenced the realisation of development objectives in Egypt.
Figure 20 shows the respondents’ vision for a more effective DME system that is institutionalised and streamlined within a results-based framework in post-revolutionary Egypt. The main elements of the vision include institutionalisation, increased awareness, participation, a stable enabling environment, provision of training and capacity-building, transparency of procedures and results and greater dissemination.
FIGURE 20: Respondents’ vision for more effective DME work in post-revolutionary Egypt.
Section IV: How to get there
This section considers how to realise the vision of an institutionalised, streamlined, effective and efficient DME system that empowers citizens and enables them to hold their governments accountable.
Empirical study findings of how to get there
Figure 21 shows what respondents recommended in order to institutionalise a DME system in Egypt. Amongst their main suggested measures were the presence of DME units in all government and NGO programmes, the provision of training and capacity-building to all concerned, a focus on outcome and impact planning and M&E, the creation of an umbrella DME agency or network to spearhead the institutionalisation process, the allocation of sufficient budgets and strong advocacy for the cause. It was suggested that the existing Ministry of State for Administrative Development could play the role of umbrella agency.
FIGURE 21: Respondents’ recommendations for how to institutionalise DME in Egypt.
As for the prerequisites for effective DME in Egypt, the majority of respondents agreed that institutionalisation and the streamlining of DME into the business-as-usual practices of the different units of the state administrative apparatus and of NGOs are the most essential. Enhanced capacity-building and professionalism were the next most vital prerequisites, and capacity-building initiatives can be more effective and less time-consuming when directed at highly educated individuals (Figure 22).
FIGURE 22: Prerequisites for realising respondents’ vision for more effective DME in Egypt.
Respondents’ opinions were split as to the feasibility of conducting M&E in post-revolutionary Egypt: some were optimistic, some pessimistic and some were not sure what to think. Each group justified its position on the feasibility of institutionalising and streamlining a DME system in the post-revolutionary environment in Egypt, as shown in Figure 23. In crisis and conflict situations in which stabilisation is ongoing, such results are to be expected. However, it is the opinion of the authors that the presence of DME in the current Egyptian context should be regarded as an imperative in the lives of Egyptians.
FIGURE 23: Extent to which respondents think DME institutionalisation prerequisites are feasible within the current post-revolutionary context in Egypt.
Whilst the pessimistic group, constituting 38% of respondents, regarded the institutionalisation of DME in post-revolutionary Egypt as difficult, challenging or impossible to achieve, the optimistic group, constituting 34%, thought that everything was possible in the near term and that people after the revolution were more eager for development. Finally, a third group, comprising the remaining 28% of respondents, was unsure about the future and thought that, with the revolution still ongoing, predicting the potential for institutionalising DME prerequisites in the near future was impossible.
The Egyptian government can build DME as a system with the potential to significantly enhance strategic performance and the achievement of development targets for the Egyptian people. DME can also be considered a performance feedback system, as it measures the outcomes and impacts of each public policy or development-related decision. At present in Egypt, especially after the revolution, the government urgently needs effective and unbiased progress-tracking and evaluative knowledge tools to demonstrate and measure the results of each policy.
A summary of the main findings of the empirical study, conducted on a purposive sample of 61 scholars, practitioners and academics representing the 10 different actors identified in the field of DME in Egypt, reveals the following about the current situation:
90% of respondents suggest there is a lack of interest and understanding of DME.
67% view DME as demand driven, usually initiated by or in response to the needs of the funding agency.
The majority of evaluations are conducted after completion and more are done at project level than at programme, policy or sector level.
Most DME activity is said to have an outcome focus.
Most respondents surveyed working in the DME field have more than five years’ experience.
The type of DME activities performed are mostly midterm and final evaluations, desk research, project or programme evaluation and impact assessment.
More than half the respondents believe that data required for satisfactory evaluation is not easily accessible.
Work teams are the most commonly used type of work structure for conducting DMEs.
Within organisations undertaking DME work, the majority have fewer than five staff members devoted to evaluation, with most having at least an undergraduate degree and nearly 40% a master's degree or higher qualification.
Surveys and interview methods are the most commonly used methods or tools in DME work.
End-of-project, end-of-programme, annual or semi-annual and quantitative reports are the most common types of DME reports produced, with annual, quarterly and semi-annual (in that order) being the most frequently cited reporting intervals.
The target audience of DME reports is an equal mix of internal and external stakeholders.
The most common type of training received by DME respondents was in M&E, followed by on-the-job training and results-based management (RBM) training.
Amongst the most common uses for DME reports are modifying plans, introducing improvements and communicating findings.
DME reports are disseminated to internal and external stakeholders mainly via websites and other online channels, as well as physically.
DME work is perceived to have no effect on public policymaking in Egypt.
Amongst the main challenges faced are the lack of access to, and the inaccuracy of, the data needed, as well as the lack of appreciation for the value of DME in Egypt.
There is mixed opinion as to whether the 2011 revolution influenced the field of DME in Egypt.
The main elements of the respondents’ vision for DME in Egypt include institutionalisation, increased awareness, participation, a stable enabling environment, provision of training, transparency and greater dissemination.
Recommended elements for achieving this vision include the presence of DME units in all government and NGO programmes, provision of training and capacity-building to all concerned, a focus on outcome and impact planning, creation of an umbrella DME agency, allocation of sufficient budgets and advocacy for the cause. Opinions differed as to the feasibility of realising the DME institutionalisation prerequisites, with some respondents pessimistic, some optimistic and others unsure.
To conclude, the post-revolution era in Egypt is characterised by a demanding public opinion that analyses and criticises every announcement. DME can make the results achieved by organisations and the government more tangible and more openly communicated. The Egyptian economy has many challenges to address and overcome. In the wake of two popular revolutions, it is the opportune time for DME to be used effectively in every sector so as to bring about significant change that can eventually resolve these challenges and extend development equitably to all Egyptians, including women and youth. Egyptians need to catch up with world trends in DME, in which citizens’ pressure is coupled with governments’ willingness and global perseverance to institutionalise and streamline evaluation. As one respondent eloquently summed up the situation, ‘when there is a D (Development), we might think of the M and the E’.
Competing interests
The authors declare that they have no financial or personal relationship(s) that may have inappropriately influenced them in writing this article.
Authors’ contributions
L.E.B. (American University in Cairo) was the project leader and, together with D.A.H. (IOCE) and N.W. (African Evaluation Association), worked on the project conceptual framework and the implementation of the survey instrument.