Background: This article outlines the diversified history, current state and future prospects of planning and evaluation in Senegal. Objectives: The aim is to nurture debate on the quest for a more ‘African-rooted evaluation practice’. Method: The article is based on an extensive grey literature review, the author’s involvement in SenEval and personal interviews. The literature on development evaluation and evaluation capacity development frames the overall analysis. Results: Donor policies and practices have heavily influenced evaluation practice in Senegal, but recent changes are shifting the emphasis towards more context-specific practice. Encouraging signs include the creation of a high-level commission for evaluation, the impetus given to results-based management in public administration and the improved monitoring of poverty reduction strategies. Also promising are the individual evaluation capacities of some local actors and the more diversified, professionalised training on offer. The latest flagship activities promoted by SenEval, a voluntary organisation of professional evaluators, and the prospect of its formalisation, could mark a turning point in the development of evaluation in Senegal. Nevertheless, evaluation practice today remains focused more on accountability and control than on learning. Moreover, the institutional setup is neither coherent nor consolidated enough to ensure a durable system for managing, conducting and using evaluations, guaranteeing their quality and their inclusion in the policy cycle. Conclusion: We argue that SenEval has a significant role to play in boosting demand, strengthening the policy and institutional framework and promoting exchanges with the African and international evaluation community.
Le développement de l’évaluation au Sénégal

Présentation: Cet article présente l’historique, le statut actuel et les perspectives de la planification et de l’évaluation au Sénégal. Objectifs: Alimenter le débat sur la quête d’une « pratique d’évaluation davantage africaine ». Méthode: L’article se fonde sur une analyse de la littérature grise, l’implication de l’auteur dans SenEval et des entretiens individuels. La littérature dédiée à l’évaluation du développement et au développement de la capacité d’évaluation a permis de structurer l’analyse. Résultats: Les politiques et pratiques des donateurs ont largement influencé la pratique de l’évaluation au Sénégal, mais les récents changements mettent l’accent sur une pratique davantage spécifique au contexte. La création d’une commission d’évaluation, l’impulsion fournie par la gestion axée sur les résultats dans l’administration publique et le meilleur suivi des stratégies de réduction de la pauvreté sont encourageants. Les capacités d’évaluation individuelles de quelques acteurs locaux et les formations plus diversifiées et professionnalisées sont également prometteuses. Les dernières activités promues par SenEval, une organisation bénévole de professionnels de l’évaluation, et leur éventuelle officialisation, pourraient constituer un tournant dans le développement de l’évaluation au Sénégal. Néanmoins, la pratique d’évaluation reste davantage axée sur l’obligation de rendre compte et le contrôle que sur l’apprentissage. De plus, la structure institutionnelle n’est ni cohérente, ni consolidée pour garantir un système pérenne permettant de gérer, réaliser et utiliser les évaluations, et garantissant leur qualité et leur intégration au cycle d’élaboration des politiques. Conclusion: SenEval a un rôle important à jouer pour stimuler la demande, renforcer le cadre politique et institutionnel et promouvoir les échanges avec la communauté de l’évaluation africaine et internationale.
The evaluation discipline, as it is defined today, originated in the USA, Canada, Germany, Sweden and the UK. It has evolved and professionalised, and in some contexts has become institutionalised, over the last thirty years. The emergence of ‘evidence-based’ policy making and growing pressure from civil society organisations have reinforced the need to better evaluate development aid. International organisations and bilateral donors have proposed evaluation frameworks and approaches for programs and projects, and evaluation practice in developing countries has been actively promoted within the development agenda during the past decades.

In Africa, recent changes in the societal context and endogenous demand from governments for country-led monitoring and evaluation (M&E) systems seem to be shifting the emphasis from donor-oriented evaluation to context-specific practice (Porter 2012). Some authors have identified ‘indigenous evaluation practices’ that originated and developed in the global South, ‘having evolved beyond the direct influence of the donor community’ (Carden & Alkin 2012). These include the African Peer Review Mechanism and capitalización (developed in Latin America and broadly used in Francophone Africa as ‘capitalisation’). The debate around the challenge of integrating the evaluation needs of donors, country partners and beneficiaries has been vigorous. A recurrent critique of evaluation in Africa is that it has focused on project-level accountability evaluations (a view expressed in various continental events since the 1990s). Responding to the Paris Declaration (2005), the Accra Agenda for Action (2008) and the Fourth High Level Forum on Aid Effectiveness in Busan (2011), the evaluation policies and practices of donors and multilateral agencies have recently tried to evolve towards strengthening national M&E systems and sector-wide evaluations.

Country-led evaluation (CLE) is usually defined as a process of evaluation wherein the government determines which evaluations will be carried out and is responsible for managing and implementing them, as opposed to externally driven evaluations. This is in line with the definition of ‘evaluation capacities’ (Organisation for Economic Co-operation and Development 2006, 2011): the ability of people and organisations to define and achieve their evaluation objectives at individual and organisational levels, enabled by a supportive environment that fosters the demand, supply and use of evaluation. This also includes the authority to set the evaluation agenda and to determine what is evaluated and what questions are asked (Segone et al. 2013).

This article analyses the recent trajectory of policy, program and project planning and evaluation undertaken by Senegalese institutions within the context of development evaluation trends and the current quest for a more ‘African-rooted evaluation practice’ (African Evaluation Association 2007). ‘Evaluation practice’ is defined here as the planned and actual evaluations, considered together with their institutional context, the main stakeholders involved and their capacities. In the absence of a comprehensive national evaluation policy or a central agency in charge of evaluation in Senegal, the past and current evaluation practices of different stakeholders are used to generate implications for the Senegalese and the wider African evaluation community.
Taking stock of the institutional evaluation practice in Senegal
Senegal established a formal national planning system after independence in 1960, but the evaluation function was not strong during those early years. At the end of the 1970s the Structural Adjustment Program (SAP) brought significant changes to the planning (and evaluation) systems. During that period, evaluation focused on macroeconomic performance, using mainly macroeconomic indicators such as debt levels, budget expenditure, the monetary situation and external trade (consistent with donor conditionalities). Some monitoring units were created in different ministries, such as the Unité de Politique Agricole. However, this opportunity did not lead to the promotion of M&E of the overall performance of sector policy from a systemic and global perspective, favouring instead a short-term or medium-term program and project approach (Diallo 2009; Diallo pers. comm., October 2012; Lom 2008).

An internal reflection within the National Planning Ministry sought ways to link the SAP with the national planning instruments. A new national planning system was therefore promoted in 1987 to ensure the relevance and effectiveness of public investments (see Figure 1). The Plan for the Orientation of Economic and Social Development (PODES) required every technical ministry to draw up its sector policy letter (Lettre de Politique Sectorielle, LPS), which provided the basis for an action plan with a list of projects and programs. A first level of project and program ex-ante evaluation was done by the ministry in charge, and the Ministry of Planning afterwards checked their coherence with the PODES and the LPS. For economically profitable projects, the EVA software was used to calculate the project’s economic impact; for the social sector (so-called ‘non-productive public projects’), only a mainly descriptive template focused on recurrent costs was used (Diallo 2009). Afterwards, the Direction de la Coopération Economique et Financière (DCEF) conducted an ex-ante evaluation.
FIGURE 1: Planning and ex-ante evaluation system proposed in 1987.
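To illustrate what this kind of ex-ante economic appraisal computes, the sketch below works through a minimal cost-benefit screening (net present value and benefit-cost ratio at an assumed discount rate) for a hypothetical public investment project. It is only an illustration of the general technique: the actual model implemented in the EVA software, and all of the figures used here, are assumptions rather than documented details.

```python
# Minimal, illustrative cost-benefit screening of a hypothetical public
# investment project. All figures and the 10% discount rate are assumptions,
# not values taken from the EVA software or Senegalese planning documents.

def npv(rate, flows):
    """Net present value of yearly net flows, with year 0 first."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

capital_cost = 500.0                                # year 0 outlay (million CFA francs, hypothetical)
net_benefits = [90, 120, 150, 150, 150, 150]        # years 1-6: benefits minus recurrent costs

rate = 0.10                                         # assumed social discount rate
flows = [-capital_cost] + net_benefits

project_npv = npv(rate, flows)
bcr = npv(rate, [0] + net_benefits) / capital_cost  # discounted benefits over initial cost

print(f"NPV at {rate:.0%}: {project_npv:.1f} MCFA")
print(f"Benefit-cost ratio: {bcr:.2f}")
# A project would typically pass this first screening if NPV > 0 and BCR > 1.
```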
Once a project was approved, the DCEF was in charge of monitoring it using ‘project annual execution bulletins’ (see Figure 2). In practice, only a small sample of projects was physically monitored (using private consultants), with a focus on financial monitoring. Technical reviews were undertaken when the technical and financial partners (TFP) required them. The National Planning Ministry was in charge of the ex-post evaluation of projects, but resources were too limited to systematise this practice.
FIGURE 2: Monitoring and ex-post evaluation system proposed in 1987.
According to the main stakeholders of that period, this new planning system did not function correctly because the procedures, tools and institutional frameworks were not respected. The selection committee and the methodological guide that should have underpinned the system were never operational, so the DCEF simply approved the donor-funded projects. Moreover, a considerable number of projects remained ‘outside the Plan’ because of parallel donor action or the decisions of inter-ministerial councils (Lom 2008).

Changes in aid evaluation around the turn of the century had an impact on the instruments and focus of planning and evaluation practice in Senegal, mainly through the first cycles of the Poverty Reduction Strategies (PRS), introduced by the World Bank and the International Monetary Fund (IMF) in the context of the Heavily Indebted Poor Countries (HIPC) Initiative. PRSs are considered the reference documents for economic and social policy in many African countries, often linked explicitly to the Millennium Development Goals (MDGs). They are intended to be more partnership-oriented than the SAP, thus requiring more national evaluators and the participation of the civil society of the South. In Senegal, there have been three PRSs since 2003. The most recent, the Document de Politique Economique et Sociale (DPES, 2011–2015), is aligned with the MDGs and with guidelines from the New Partnership for Africa’s Development (NEPAD) and the African Union (AU). The institutionalisation of the monitoring of public policies through the PRS-II is considered a major innovation, of particular interest in the education, water and sanitation, and health sectors (République du Sénégal 2011). However, as in many African countries, progress in terms of evaluation has been more modest, and monitoring systems still seem to be designed to meet donor reporting requirements (Segone et al. 2013).

Senegal has also joined African initiatives to assess national performance, notably the African Peer Review Mechanism (APRM), established in 2003 by the AU in the framework of the implementation of NEPAD. The APRM is a voluntary self-evaluation tool to ensure that country policies and practices are in line with the values, codes, norms, criteria and indicators of political, democratic and economic good governance. Senegal has been a member of the APRM since 2004, but its review has not yet been carried out.

The National Evaluation Capacity agenda is conceived as part of good governance efforts (Segone et al. 2013:17). In Senegal the National Program of Good Governance (NPGG) has comprised three phases since 2003. It has been steered by DREAT (Délégation Chargée de la Réforme de l’Etat et l’Assistance Technique), a management reform agency attached to the presidency which advises all branches of government on improving M&E and results-based management (RBM) policy and practice in the framework of the governance reform programme (Ndiaye & Aw 2012). Amongst the components of the NPGG, the ‘institutionalisation of evaluation’ is a priority; the expected outcomes include the creation of an ‘organ for public policy and strategies evaluation’, as well as capacity-building efforts in the technical ministries to strengthen their units in charge of ‘policy, planning and evaluation’ (Gouvernement du Sénégal c. 2002). Moreover, DREAT included amongst the strategic thrusts of the State Reform Framework (2011–2015) the need to institutionalise public policy evaluation, including a legal and regulatory framework and a structure to promote it (DREAT 2010).
As will be discussed later, DREAT’s work has helped to place evaluation on the political agenda, although the institutional design of a coherent M&E system is still in its early stages (Ndiaye & Aw 2012). The weaknesses of national planning and evaluation have been identified by different authors and institutions (GTZ 2010). Diallo (2009) gave low scores to the capacities of the main governmental institutions in charge of planning and evaluation. According to this research, based on interviews with key actors in those institutions, the National Planning Directorate (Direction de la Planification Nationale, DPN) obtained the lowest score, followed by the technical ministries and the DCEF. The overall score of 0.89/9 led him to conclude that ‘the involved actors do not play their roles in the public investments planning and evaluation system’.

As part of its Policy Support Instruments (PSI), the IMF (2007, 2011) recommended reinforcing the Director General of Planning’s responsibility, exercised through the DPN, for the evaluation of projects and programs. At the same time, it highlighted the responsibility of the technical ministries to ensure their own planning, monitoring and evaluation function. Four ministries were chosen to pilot the creation of planning (and evaluation) structures: Education, Health, Environment and Agriculture. Targets set for 2008 and 2009 in terms of ex-ante evaluation using cost-benefit analysis were not achieved, owing to the lack of resources to train ministry staff.

A promising, although limited, advance can be seen at the level of the DPN, which has the mandate to conduct evaluations of development projects and programs. Since 2008, the DPN has managed around 20 evaluations, mainly of national execution projects funded by the United Nations Development Programme (UNDP). It has managed them from the drafting of the terms of reference (ToR) to the acceptance of the evaluation report, chairing the evaluation steering committees. The DPN also summarises the reports and issues a note to the Ministry of Economy and Finances, which usually follows up with the sector ministry. Overall, the IMF acknowledged some modest progress by mid-2011 with regard to the planning, evaluation and selection of public investment projects, notably the drafting of a ‘Project Preparation Guide’, the elaboration of an evaluation guide using the cost-benefit method, and the ex-post analysis of two completed projects (technical analysis of economic and social returns).
Recent landmarks in evaluation in Senegal
Senegal is part of the ‘amazing growth in Voluntary Organizations of Professional Evaluators (VOPEs) around the world’ (Rugh 2013:13). According to mapping exercises by the International Organisation for Cooperation in Evaluation (IOCE) and EvalPartners, by March 2013 there were 100 national, 12 regional and 11 international VOPEs. About 30 of the national VOPEs are in Africa, along with two regional ones (the African Evaluation Association, AfrEA, and the Middle East and North Africa Evaluators Network, MENA). This is a steep progression from the five regional and national evaluation organisations that existed in the world before 1995 (Russon & Russon 2000).

The Senegalese evaluation network (SenEval) was established in 2003, after a UN-promoted workshop on M&E in Dakar. SenEval was conceived as a flexible and dynamic organisation to promote a culture of evaluation through the exchange and diffusion of information amongst members and with the outside world. Interest in SenEval has grown steadily (more than 350 people were subscribed to the bimonthly bulletin in 2013). Members come from diverse backgrounds, including ministries and other governmental structures, universities and training and research institutions, think tanks and consulting companies, UN agencies, donors and NGOs, as well as individual practitioners. Amongst the main activities envisaged in SenEval’s charter (SenEval c. 2003) are: sensitising different actors in order to foster critical reflection on the challenges of evaluation and its relationship to governance, disseminating evaluation norms and standards, promoting the institutionalisation of evaluation, supporting the training of key actors, and providing methodological support and exchanges of practice in the M&E domain.

Two important activities supported by SenEval during its early years were the study on evaluation capacities (Varone et al. 2007a, 2007b) and the Senegalese Evaluation Days in 2008. The study of evaluative capacities analysed the ‘declared practice’ of evaluation, identified the strengths and weaknesses of evaluative capacities following a metaevaluation approach, and defined different scenarios for the institutionalisation of the evaluation function. It focused on the actors and structures involved in the management of the PRS and was conducted simultaneously in two other pilot countries, Niger and the Republic of Congo. Table 1 summarises the stages of the study, the tools developed for each stage and its main results.
TABLE 1: Summary of the study of evaluative capacities in Senegal.
The main findings of the study were presented at the AfrEA Conference in Niamey in 2007. The authors acknowledged several limitations of the study in Senegal: (1) regarding the survey, a low response rate and an over-representation of public administration evaluation practice and of institutions already engaged in evaluation; (2) regarding the interviews, problems of representativeness of the responses, since a very small sample was used; it was also evident from the responses that the interviewees did not share a common understanding of the concept of ‘evaluation’, mixing monitoring with evaluation and with other concepts related to RBM; (3) regarding the metaevaluation, the results were very weak in the three countries, offering some interesting data in Niger but being especially partial and inconclusive in the case of Senegal. Regarding the evaluation capacity development (ECD) plan, some general lines of work were identified, but again the Senegalese case was considered the weakest in the synthesis report.

In October 2008, the first Senegalese Evaluation Days (Journées Sénégalaises de l’Evaluation, JSE) were held in Dakar, organised by a committee chaired and supported by DREAT with the participation of SenEval. There were 28 papers and four round tables, as well as an exhibition of the evaluation work of 14 institutions, including ministries, UN agencies, training and research centres, civil society organisations and private evaluation firms. A total of 267 participants attended. The event endorsed a draft action plan based largely on the recommendations of the diagnostic study on the institutionalisation of the evaluation of public policy in Senegal. The JSE considered that the national context was increasingly favourable to the emergence of an evaluation culture, but that it was necessary to strengthen political will, as well as the capacities of the different stakeholders, the mobilisation of resources, citizens’ participation and the effective utilisation of evaluations. The action plan was based on four axes: strengthening the demand for the evaluation of public policies, strengthening the supply of evaluation, institutionalising the evaluation of public policies, and strengthening SenEval (SenEval 2008).

The action plan did not become operational and there was little follow-up in 2009 and 2010. From the start of 2011, SenEval relaunched its activities and organised several meetings and trainings. Bimonthly e-newsletters are now sent regularly and a virtual platform with key documents is available, although it is still underused. SenEval has functioned through the volunteer spirit and goodwill of a small core of people, with no formal leadership, no budget and no staff. SenEval held a general assembly in October 2012 to formally establish an evaluation association in place of the network and to elect the officers and members of the coordination committee. The draft Strategic Plan 2013–2015 has three main axes: strengthening the enabling environment (conferences, high-level seminars and advocacy with the media), professionalisation and capacity building of evaluation actors (training seminars, information sharing, harmonisation of professional norms and standards), and research promotion (partnerships with universities and research centres, publications, support for publishing in academic journals). EvalPartners is also supporting the development of a peer-to-peer initiative between SenEval and the Quebec Programme Evaluation Society (SQEP).
Since its creation, SenEval has advocated for the institutionalisation of evaluation, targeting principally the presidency of the republic, DREAT, the General Directorate of Planning of the Ministry of Economy and Finances, and the Government Inspection Office (Inspection Générale d’Etat). This contributed to the government’s decision in March 2012 to establish, in the president’s office, a Commission for the Evaluation and Monitoring of Public Policies and Programmes (Diop et al. 2013). It is noteworthy that the name of this commission puts ‘evaluation’ first, instead of the more common ‘M&E’. According to interviews conducted with key actors in early 2013, its mandate, structure and membership remain to be defined.

Another interesting opportunity for evaluation practice in Senegal and the sub-region is the recent selection of the Dakar-based CESAG (Centre Africain d’Etudes Supérieures en Gestion) by the CLEAR Initiative as the Francophone Centre of Excellence in Evaluation. CESAG is a tertiary-level management training institution which offers a diploma (diplôme d’études supérieures spécialisées, DESS) in project management with a significant emphasis on M&E. CLEAR (Regional Centers for Learning on Evaluation and Results) is a multiregional initiative to strengthen national M&E and performance management capacity in order to achieve development outcomes. This accomplishment is felt to be a collective success by the Senegalese evaluation community (the network, researchers, CESAG and others). According to informal exchanges with the CESAG team, they aim to improve evaluation practice in the sub-region by creating a critical mass of professionals and trainers in evaluation, as well as by supporting national evaluation systems (and national evaluation networks or associations) and promoting applied research.
Brief diagnosis of the current state of evaluation in Senegal
The ‘systemic and integrated approach to National Evaluation Capacities Development, NECD’ (Segone et al. 2013:22) has been used to critically review the current state of evaluation in Senegal. This model focuses on three complementary levels: the enabling environment, the institutional framework and the individual level. The study on evaluative capacities endorsed a similar approach (SenEval 2008): the macro level is conceived as the ‘institutional approach’ (an agency to frame and promote evaluation practice at the national level), the meso level as the ‘organisational approach’ (integration of evaluation practice in the administration, including both central ministries and local authorities) and the micro level as the ‘technical approach’ (which should facilitate quality evaluation practice through standards and methodologies).

In relation to ‘the enabling environment for evaluation which provides a context that fosters (or hinders) the performance and results of individuals and organisations’, some encouraging signs can be found, notably the decision by the newly elected president in 2012 to establish the Commission for the Evaluation and Monitoring of Public Policies and Programmes, probably influenced by the advocacy efforts of DREAT and by the Evaluation Days in 2008. The experience gained from the reform of public finance management and procurement systems, in relation to RBM, should be capitalised upon. Similarly, the strengthened National Agency of Statistics and Demography (ANSD) is an asset for ensuring the improved availability of data for PRS and other monitoring. There are signs of a growing evaluation culture, although it is more focused on monitoring and emphasises evaluation more for accountability or control than for learning (Ndiaye & Aw 2012). Government demand for evaluation is increasingly evident in official discourse, if not yet in practice.

By mid-2013 there was no formal evaluation policy in Senegal, nor national evaluation standards or norms. This leads to heterogeneous practice drawing on a variety of donors’ standards and practices. The multiplicity of institutions commissioning, and to a lesser extent conducting, evaluations reflects the lack of clearly assigned roles and responsibilities, hindering attempts to find synergies and complementarities. Interesting practice has nevertheless been accumulated by different institutions, for instance the National Planning Directorate (DPN). Moreover, the monitoring experience gained through the progress reports on the PRS and on sectoral programs could be consolidated and strengthened from the evaluation point of view. More should be done to empower other stakeholders (citizens, civil society organisations, etc.) to demand the evaluation of public policies. An interesting window of opportunity is available to the Senegalese evaluation community through the EvalPartners-supported peer-to-peer exchange with SQEP and informal exchanges through CLEAR–CESAG activities.

The second aspect of the NECD approach deals with the ‘institutional framework, … the system and structures needed to perform and attain results individually as well as collectively as an organisation’. There is no formal centralised evaluation function, as it is dispersed amongst different structures (Ndiaye & Aw 2012), with no clear quality assurance of evaluations (apart from the limited experience of the DPN with the evaluation of UNDP-supported national execution projects).
Both the number of staff with evaluation expertise in the ministries and the DPN, and their level of expertise, seem to be insufficient, despite some recent training. An analysis of 14 key African evaluation events from 1990 to 2012 shows uneven participation by key Senegalese evaluation players, and an absence of feedback (restitution) events once participants were back in Senegal. Senegalese participation in those events came mainly from donor agencies, research institutions and consultants, with the exception of representatives of the Ministry of Education and the DPN. There is no system in place to report on evaluation findings or to follow up evaluation recommendations. The author’s experience in identifying a sample of 50 evaluations conducted over the past 12 years in Senegal in the agriculture and environment sectors seems to indicate that evaluation reports are not kept by line ministries or by any other central unit. In response to requests, representatives of government departments tend to suggest contacting the donor who funded the evaluation in order to obtain the final report, suggesting limited evaluation utilisation.

Finally, at the individual level (‘knowledge, skills and competencies to perform tasks and manage processes and relationships’), the situation in relation to the NECD framework is as follows. The capacity to manage evaluations independently and credibly at senior level in the ministries still needs to be reinforced in order to ensure nationally led processes. National evaluators are routinely engaged in evaluation teams, usually commissioned by donors, although there is no consolidated database of consultants apart from a roster used by the DPN (with about 50 consultants in 2013). SenEval started a similar endeavour and has received more than 50 profiles, but they have not yet been screened or cross-checked against those in the DPN database.

The supply of evaluation training was analysed by Traoré (2008) in a presentation at the Senegalese Evaluation Days. Through a survey of seven training institutions, he concluded that M&E training was usually integrated into broader academic programs (usually a bachelor’s or master’s degree), with no specific certified training programme on public policy, programme and project evaluation. Various short courses (less than 90 hours) on program and project M&E were available. The usual curricula comprised: the evaluation process, indicators, data collection and analysis methods, economic evaluation, and impact evaluation. Interestingly, he found weak internal capacities in the interviewed training institutions (few in-house trainers in evaluation), with external expertise not always available, especially in public policy evaluation. The situation does not seem to have changed significantly, although the establishment of the CLEAR–CESAG centre has created some expectations. SenEval has also organised ad hoc trainings since 2011. Although the effect has not been quantified, SenEval’s newsletter has also fostered members’ participation in various online evaluation trainings and formal courses. SenEval ‘core members’ have accompanied and mentored other members in participating in international conferences and in publishing their evaluation-related work in specialised journals.
Implications for the Senegalese and the African evaluation community
Senegal has accumulated a highly diversified planning and evaluation experience that has been shaped by development aid trends over the years. In particular, the evaluation policies and practices of donors and multilateral agencies have heavily influenced evaluation practice. There is an increasing recognition of the key role of evaluation in national development processes, and strengthening national evaluation capacities remains central to the good governance agenda (Segone et al. 2013). Although some authors find that evaluation practice in Senegal has started to respond to the continental trend towards more context-specific, country-led M&E systems (Porter 2012), the majority of evaluations are still undertaken in alignment with donor project cycles and public policy evaluations remain rare (Ndiaye & Aw 2012). On the one hand, in Senegal, as in 70% of African countries (Segone et al. 2013), interesting experience has been gained through the monitoring (and, to a lesser extent, the evaluation) of the PRS. The role of certain government structures, like DREAT, and the limited but interesting experience of nationally led evaluations (like those of the DPN) are points to be consolidated and improved. On the other hand, there is no formal centralised M&E function, and evaluation practice continues to be too heterogeneous to assess its quality and utility.

The experience of SenEval, the evaluation network, has been chosen from amongst 15 other prominent cases for inclusion in a recent publication about the growth and consolidation of VOPEs in the world (Rugh 2013). Although SenEval is still in the process of obtaining official recognition as a formally constituted association (like 15% of the 123 VOPEs listed by EvalPartners and IOCE in 2013), it seems to have gone beyond the phase in which VOPEs focus on individual skills development, and efforts to strengthen its institutional capacity are being made. This should lead to a better capacity to boost not only the supply side of evaluation (the capacity of members to conduct evaluations) but also the demand side (requests for evaluations and better management of evaluations, from donors, national government and civil society alike). Nevertheless, obvious constraints remain, related to the lack of funds and the reliance on the motivation and commitment of a limited group of people working on a voluntary basis. Experience has shown the importance of key high-level events, like the Senegalese Evaluation Days and the study of evaluation capacities, in boosting the demand for evaluation. Second editions of these could create momentum to build on, under the leadership of SenEval and the other key actors mentioned in this article.

Recurrent debates have been raised during African conferences (AfrEA 2007, amongst others) around ‘making evaluation our own’, ‘made in Africa evaluation’ and ‘African-rooted evaluation’. Nevertheless, it appears that credible indigenous ways of thinking about and doing evaluation within the African community are still to emerge (Traoré & Wally 2013). Interesting efforts have been launched, such as the AfrEA evaluation capacity-building project in 2010 and the African Thought Leadership Forum on Evaluation and Development in Bellagio, supported by AfrEA and the CLEAR Centre in South Africa. The African Evaluation Journal is another example of this endeavour.
SenEval and other African VOPEs have a key role to play in reinforcing local evaluation practices, as well as in connecting continental and global initiatives like AfrEA, CLEAR and EvalPartners with national evaluation dynamics. This should foster a permanent and fluid exchange amongst African countries and promote further South–South cooperation. Research institutes, public administration and civil society should also be actively involved in shaping this new scenario, in which donors are being definitively displaced from the driver’s seat of the evaluation journey, allowing the evaluation community to choose its future destination. This could offer opportunities to the wider African community in the quest for endogenous evaluation processes, which could finally bring greater ownership and utilisation of evaluative work.
Acknowledgements
Thanks are due to Ian Hopwood for his invaluable advice and support since the beginning of this research. The author is also grateful to her research supervisors, Dr Maria Bustelo and Dr Jordi Morato. Ababacar Diallo of the DPN and Babacar Diakahte of DREAT generously shared their views and unpublished materials to enrich this article. The wider community of SenEval, and its core group of active members, has provided incredible support throughout this journey; a real partnership and a two-way learning process have been established since 2011. This research is all the more stimulating thanks to their comradeship and encouragement.
Competing interests
The author has been personally involved in SenEval since 2011 through volunteer work. She is one of the core active members engaged in the promotion of National Evaluation Capacities, especially in communication and knowledge sharing through the electronic newsletter. She also works full-time in M&E for the UN. No data accessed through her position as a UN staff member in Dakar has been used for this article or for any other part of her PhD research.
References
African Evaluation Association (AfrEA), 2007, ‘Making evaluation our own: Strengthening the foundations for Africa-rooted and Africa-led M&E’, summary of a special conference stream and recommendations to AfrEA, UNDP, Niamey, Niger, 17–19 January 2007.
Carden, F. & Alkin, M.C., 2012, ‘Evaluation roots: An international perspective’, Journal of MultiDisciplinary Evaluation 8(17), 102–118.
Diallo, A., 2009, ‘Contribution à l’évaluation de la mise en œuvre du système de planification et d’évaluation de l’investissement public au Sénégal’, Master’s thesis, CESAG, Dakar.
Diop, M., Faye, S.S., Hopwood, I., Kinda, O., Lomeña, M., Boumas, G. et al., 2013, ‘SenEval – A decade of advocacy and action for evaluation in Senegal’, in J. Rugh & M. Segone (eds.), VOPEs: Learning from Africa, Americas, Asia, Australasia, Europe and Middle East, pp. 249–261, Evaluation Working Papers #9, UNICEF, EvalPartners, IOCE.
DREAT, 2010, Schéma Directeur de la Réforme de l’Etat 2011–2015: Une administration moderne, axée sur les résultats de développement au service du citoyen, Secrétariat Général de la Présidence de la République du Sénégal, Dakar.
Gouvernement du Sénégal, c. 2002, National Program of Good Governance, document de projet, viewed 14 May 2013, from http://www.gouv.sn/Programme-national-de-bonne.html
GTZ, 2010, Diagnostic préliminaire de la situation de la Direction Générale du Plan (DGP), Government of Senegal, Dakar/Cologne.
International Monetary Fund (IMF), 2007, Senegal: Request for a three-year Policy Support Instrument – Staff report; staff statement; press release on the Executive Board discussion; and statement by the Executive Director for Senegal, Author.
International Monetary Fund (IMF), 2011, Senegal: Letter of intent, memorandum of economic and financial policies, and technical memorandum of understanding, Author.
Lom, A.D., 2008, ‘Expérience sénégalaise en matière d’évaluation, contraintes et perspectives’, presentation at the Journées Sénégalaises de l’Evaluation, Dakar, 28–30 October.
Ndiaye, M.A. & Aw, B., 2012, ‘The M&E system in Senegal’, in CLEAR, African monitoring and evaluation systems: Exploratory case studies, pp. 94–139, Graduate School of Public and Development Management, University of the Witwatersrand, Johannesburg.
Organisation for Economic Co-operation and Development (OECD), 2006, The challenge of capacity development: Working towards good practice, Author, Paris.
Organisation for Economic Co-operation and Development (OECD), 2011, Evaluation Network tip-sheet: Supporting evaluation capacity development, viewed 26 April 2013, from http://www.oecd.org/dac/evaluationnetwork
Porter, S., 2012, ‘The growing demand for M&E in Africa’, in CLEAR, African monitoring and evaluation systems: Exploratory case studies, pp. 6–19, Graduate School of Public and Development Management, University of the Witwatersrand, Johannesburg.
République du Sénégal, 2011, Document de Politique Economique et Sociale (DPES) 2011–2015, Author.
Rugh, J., 2013, ‘The growth and evolving capacities of VOPEs’, in J. Rugh & M. Segone (eds.), VOPEs: Learning from Africa, Americas, Asia, Australasia, Europe and Middle East, pp. 13–32, Evaluation Working Papers #9, UNICEF, EvalPartners, IOCE.
Russon, C. & Russon, K., 2000, The annotated bibliography of international program evaluation, Kluwer Academic Publishers, New York. http://dx.doi.org/10.1007/978-1-4615-4587-3
Segone, M., Heider, C., Oksanen, R., de Silva, S. & Sanz, B., 2013, ‘Towards a shared framework for national evaluation capacity development’, in M. Segone & J. Rugh (eds.), Evaluation and civil society: Stakeholders’ perspectives on national evaluation capacity development, pp. 16–42, Evaluation Working Papers #8, UNICEF, EvalPartners, IOCE.
SenEval, c. 2003, Charter of SenEval, Author.
SenEval, 2008, Proceedings of the Journées Sénégalaises de l’Evaluation, viewed 14 May 2013, from http://www.evaluation.francophonie.org/spip.php?article664
Traoré, A., 2008, ‘Renforcer l’offre de formation pour répondre aux nouveaux enjeux de l’évaluation’, Table Ronde 4: Professionnalisation de l’évaluation, Journées Sénégalaises de l’Evaluation, viewed 31 May 2013, from http://www.evaluation.francophonie.org/IMG/pdf/Renforcer_l_offre_de_formation.pdf
Traoré, I.H. & Wally, N., 2013, ‘Institutionalization of evaluation in Africa: The role of AfrEA’, in J. Rugh & M. Segone (eds.), VOPEs: Learning from Africa, Americas, Asia, Australasia, Europe and Middle East, pp. 249–261, Evaluation Working Papers #9, UNICEF, EvalPartners, IOCE.
Varone, F., De Muynck, E., Lo, A.K., Ndoye, O. & Van Ufford, P.Q., 2007a, L’évaluation comme exigence démocratique: Etude diagnostique des capacités évaluatives au Sénégal, viewed 28 April 2013, from http://evaluation.francophonie.org/IMG/pdf/Capitulo1_Seneval_daz.pdf
Varone, F., De Muynck, E., Lo, A.K., Ndoye, O. & Van Ufford, P.Q., 2007b, L’évaluation comme exigence démocratique: Etat de la pratique déclarée – résultats de l’enquête, http://evaluation.francophonie.org/IMG/pdf/Capitulo2_Seneval.pdf