About the Author(s)


Kambidima Wotela
Graduate School of Governance (WSG), University of the Witwatersrand (WITS), South Africa

Citation


Wotela, K., 2017, ‘A proposed monitoring and evaluation curriculum based on a model that institutionalises monitoring and evaluation’, African Evaluation Journal 5(1), a186. https://doi.org/10.4102/aej.v5i1.186

Original Research

A proposed monitoring and evaluation curriculum based on a model that institutionalises monitoring and evaluation

Kambidima Wotela

Received: 14 Nov. 2016; Accepted: 27 Feb. 2017; Published: 12 Apr. 2017

Copyright: © 2017. The Author(s). Licensee: AOSIS.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Background: African politicians, bureaucrats and technocrats have thrown their weight in support of monitoring and evaluation (M&E). This weight has compelled training institutions to add M&E to their offerings. Most often, at the end of these training programmes, attendees know what they have learnt but seem not to internalise it and, worse, they hardly ever put their newly acquired knowledge into practice. This shortcoming has led to what we term ‘monitoring and evaluation training hopping’, where participants move from one training to another hoping that they will eventually fully comprehend the skill and apply it to their work. This rarely happens, and so participants often blame themselves, yet the problem lies with the training institutions, which teach the middle tier (how to monitor and evaluate) and the bottom tier (data and information management) while the top tier, which links M&E to ‘the what’ and ‘the how’ as well as ‘the why’ of the development intervention and public policy landscape, is missing.

Objectives: To propose an M&E curriculum that institutionalises M&E within the implementation and management of development interventions.

Method: We use systems thinking to derive the key themes of our discussion and then apply summative thematic content analysis to interrogate M&E and related literature. Firstly, we present and describe a model that situates M&E within development and public policy. This model ‘idealises or realises’ an institutionalised M&E by systematically linking the contextual terms as well as the key terms prominent in established descriptions of M&E. Secondly, we briefly describe M&E from a systems thinking approach by pointing out its components, processes and established facts, as well as its issues and debates. Lastly, we use this model and the systems thinking description of M&E to propose an institutionalised M&E curriculum.

Results: Our results show that for an explicit understanding of M&E, one needs to understand all three tiers of M&E. These are development interventions and public policy (top tier), M&E concepts, terminologies and logic (middle tier) and data collection and storage, data processing and analysis, reporting and some aspects of integrating the findings into planning, implementation and management (bottom tier).

Conclusion: Unless we offer all-round M&E training, we will not move beyond monitoring and evaluating development interventions for compliance.

Introduction

At some point, the reason underlying absent or ineffective monitoring and evaluation (M&E) of development interventions in some African countries was a lack of political will and resistance by influential bureaucrats and technocrats. This is partly because M&E provides information that is not politically desirable (Baradei, Abdelhamid & Wally 2014). However, as Porter and Goldman (2013) point out, in recent years there has been a growing demand for evidence-based decision-making among politicians and bureaucrats – more so the former, as the continent becomes more democratic and citizens increasingly demand accountability from their ruling elite (Baradei et al. 2014).1 Indeed, Porter and Goldman (2013) note that politicians in Benin, South Africa and Uganda have thrown their weight in support of M&E. To this list, we can add Kenyan, Ghanaian and Rwandese politicians. Inevitably, it seems the political weight is a little too much for the civil servants, who have eventually taken to monitoring and evaluating development interventions for compliance.

However, this undesirable tendency to undertake M&E for compliance reasons is not limited to government departments and public institutions. The political weight has also fuelled demand for training in M&E, which has compelled training institutions to jump onto the bandwagon and teach it. Most often, at the end of these training programmes, attendees do know what they have learnt but seem not to internalise this knowledge and, worse still, they cannot put their newly acquired knowledge into practice. This has led to what we term ‘monitoring and evaluation training hopping’, where participants move from one training institution or workshop to another in the hope that they will eventually decode this supposedly new technique and, more importantly, apply it to their work. Unfortunately, this desire eludes them even after the next training and the next workshop and the next, because the constraint is not the conceptual ability of the participants. Instead, it is the training institutions, which have packaged operations management and sometimes some features of strategic management or similar and called it ‘monitoring’. Similarly, these institutions have packaged applied research skills and anything similar and called it ‘evaluation’. Further, they have taught these modules without linking M&E to ‘what it is’ and ‘what it should be doing’ as well as ‘how’ and ‘why’ in the context of development interventions, often mentioning that M&E is a management tool for decision-making but omitting or failing to detail what that management entails. Unfortunately, participants still blame themselves for not decoding M&E and for failing to apply this function to their work after attending a training session, if not several.

In this article, we introduce a model – derived elsewhere, using systems thinking – meant to institutionalise public management and M&E arrangements within the development and public policy landscape. We derived this model to contextualise M&E. Obviously, contextualising and, therefore, institutionalising M&E has capacity-building implications. To cater for this, we briefly describe the components, processes, established facts, as well as issues and debates of M&E before proposing an institutionalised M&E curriculum.

The model: The relationship between development, public policy and the key stages of the public policy cycle framework

Using Gharajedaghi’s (2006) systems methodology and Fisher’s (1983) ‘devising seminars’, Wotela (2016) has proposed six questions for decoding an academic field of study. The six questions are (1) ‘what is [insert field of study of interest]?’, (2) ‘what is the purpose of [insert field of study of interest]?’, (3) ‘what are the components (structure and function) of [insert field of study of interest]?’, (4) ‘what are the processes in [insert field of study of interest]?’, (5) ‘what are the established facts in [insert field of study of interest]?’, and (6) ‘what are the key issues and debates in [insert field of study of interest]?’ Note that in teaching academic fields of study, most modules pursue questions 1, 2, 5 and 6. Omission of questions 3 and 4 implies that one can neither link the various elements, components and processes in an academic field of study nor link such a field to another. This is crucial to M&E because it has various elements, components and processes. Elements include inputs, activities, outputs, outcomes and impact, as well as indicators, baseline values, target values, assumptions and risks, alongside the results chain and results framework and the different types of evaluation. Data collection, processing, analysis and reporting comprise some of the key elements of M&E. Therefore, to be comprehensive as well as critical, one has to discuss this field of study using the six questions. This is why we discuss M&E components, processes, established facts, as well as issues and debates before we propose the curriculum.

Further, M&E is not an island; it is linked to other fields of study, most notably development, public policy and governance. In a forthcoming article, we have used Wotela’s (2016) six questions to link the various elements, components and processes of the fields of study that are key to M&E, to link these fields of study together, and to link them to M&E. More specifically, we asked ‘what is development?’, ‘what are the components (structure and function) of development?’ and ‘what are the processes in development?’ as well as ‘what is public policy?’, ‘what are the components (structure and function) of public policy?’ and ‘what are the processes in public policy?’ In doing so, we applied Gharajedaghi’s (2006:107) systems methodology to see ‘through the chaos and understand the complexities’ of M&E. After deriving the initial model, we subjected it to Fisher’s (1983) ‘devising seminars’ for iteration until we reached the model presented in Figure 1. The main objective of the forthcoming article is to contextualise M&E in an effort to foster good governance. However, its resulting model nests and justifies the curriculum that we propose in this paper. For this reason, we provide a brief but sufficient description of this model without reproducing the forthcoming article. With details missing, the model may appear complex; should that be the case, we recommend reading the other article as well.

FIGURE 1: Illustrating the relationship between development (and its components and processes), public policy (and its components and processes) and the key stages of the public policy cycle framework.

Fundamentally, Figure 1 links development (its components and processes), public policy (its components and processes) and the specified stages of the public policy cycle framework. First, development, which fundamentally entails ‘change’ in the short, medium and long term (Sumner & Tribe 2008) towards a relatively desirable human aspiration (Slim 1995), has five components. These are cultural development (Burkey 1993; Gereffi & Fonda 1992; Jack-Akhigbe 2013), political development (Acemoglu & Robinson 2012; McFerson 1992), economic development (Sachs 2005), social development (Gray 2006; Roseland 2000; Watson 2012) and environmental development (Roseland 2000; Slim 1995). Second, as Sumner and Tribe (2008) have argued, development has two processes: immanent (unintentional or natural) and imminent (intentional or willed). It is the latter that we refer to as development interventions in M&E. From an M&E point of view, one can treat immanent development as either the assumptions or the risks described in a results framework. Third, a development intervention and, therefore, M&E can be at policy level, programme level or project level (Kusek & Rist 2004). For us to understand these three levels of development interventions, we need to appreciate public policy before anything else.

Fourth, public policy – described by Jann and Wegrich (2007) as an applied social science discipline that uses multiple methods of inquiry and argument to identify, formulate, implement and evaluate development interventions – points to the ‘how’ of the ‘what’, with the ‘how’ being public policy and the ‘what’ being development. It also has five components, namely leadership (Simeon 1976), governance (Hill & Hupe 2014), political economy or simply macroeconomics (Simeon 1976), institutional arrangements or analysis, and organisational arrangements or analysis (Porter & Goldman 2013).2 Fifth, public policy processes include research (Geurts 2014), decision-making (Simon 1945; Zittoun 2009) and the public policy cycle (Lasswell 1956).

Lastly, Lasswell (1956) proposed the seven stages of a policy cycle, that is, intelligence, recommendation, prescription, invocation, application, appraisal and termination. Thereafter, several authors such as Anderson (1975), Jenkins (1978), May and Wildavsky (1978) as well as Brewer and deLeon (1983) proposed variations to this stage model. In these newer versions, the proposed technical stages include ‘issue formation’ or ‘diagnosis’, ‘formulation’, ‘implementation’ and ‘evaluation’, while the proposed political elements include ‘policy agenda setting’ and ‘policy adoption’. To align the model with practice, we reduce these stages to four after incorporating ‘agenda setting’ as part of the diagnostic stage and ‘policy adoption’ as part of the formulation stage. This adaptation matches the system the South African Presidency is using after the National Planning Commission produced the Diagnostic Report in July 2011, which set part of the agenda of the African National Congress. Thereafter, the Commission produced the National Development Plan 2030 in November 2011, which the Congress adopted during its 53rd National Conference in Mangaung (Free State province) in 2012.

The diagnostic stage, presumed to be the first port of call, should address three questions: ‘what is the problem?’, ‘who are the beneficiaries and what are their needs?’ and ‘who are the stakeholders and what are their interests?’ All else being equal, a detailed problem analysis should deliver an effective development intervention because it exposes the root cause of the problem and is, therefore, likely to justify an effective remedy. Similarly, by interrogating the last two questions, we are likely to gain a detailed understanding of the people whose lives the intervention will change and, therefore, should be able to deliver a relevant and sustainable development intervention. The second stage, formulation, involves applying the theory of change, the results chain and framework, as well as the logical framework to link the perceived ‘impact’ to the required ‘outcomes’ and ‘outputs’, and these to their corresponding ‘activities’ and ‘inputs’.

The third stage (implementation) comprises management and monitoring of ‘inputs’ and ‘activities’ to produce ‘outputs’ meant to realise the intended ‘outcomes’ and, consequently, the intended ‘impact’. This is why, put more directly, monitoring is a management tool for overseeing the use of inputs, the undertaking of activities and the production of outputs. We should remember that these three parameters are not an end in themselves. Lastly, the evaluation stage implies stocktaking to assess whether the produced outputs are leading to the outcomes meant to bring about the desired impact. This is summative evaluation, bearing in mind that there are other forms of evaluation (which we discuss in the next section). In sum, the last two stages of the public policy cycle framework kick-start the description and discussion of M&E terminology. Finally, note how the model links M&E terminology to the broader contextual terms much more systematically.

Monitoring and evaluation: Components, processes, established facts, issues and debates

Figure 2 illustrates that the components (structure and function) of M&E studies are monitoring, formative or design evaluation, process or implementation evaluation and summative evaluation. We can further divide summative evaluation into outcome and impact evaluation. Over and above these components, the five elements of the results chain – that is, impact, outcomes, outputs, activities and inputs – and those of the results framework – that is, indicators and their accompanying data sources, baseline values, target values, assumptions and risks – make up the fundamental elements of M&E, with varying importance depending on which of the five components is of interest or under focus.

FIGURE 2: Illustrating the relationship between monitoring and evaluation and its components and processes.
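
To make these elements concrete, the following is a minimal illustrative sketch – not drawn from the article or from any cited toolkit – of how a single results-framework entry could be recorded in code. The intervention, indicator, baseline and target below are hypothetical assumptions chosen purely for illustration.

```python
# A minimal, hypothetical sketch of one results-framework entry.
# All names and values are illustrative assumptions, not data from the article.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ResultsFrameworkEntry:
    level: str            # results-chain level: input, activity, output, outcome or impact
    statement: str        # the result being tracked at this level
    indicator: str        # how the result is measured
    data_source: str      # where the indicator data come from
    baseline: float       # indicator value at the start of the intervention
    target: float         # indicator value the intervention aims to reach
    assumptions: List[str] = field(default_factory=list)
    risks: List[str] = field(default_factory=list)

# Hypothetical outcome-level entry for an imagined education intervention.
entry = ResultsFrameworkEntry(
    level="outcome",
    statement="Improved learner literacy in participating schools",
    indicator="Share of Grade 4 learners reading at the expected level",
    data_source="Annual standardised literacy assessment",
    baseline=0.42,
    target=0.60,
    assumptions=["Teachers attend the planned training"],
    risks=["High teacher turnover"],
)
print(f"{entry.level}: {entry.indicator} ({entry.baseline:.0%} -> {entry.target:.0%})")
```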

Processes in M&E include continuous data collection and storage, data processing and analysis, as well as reporting and integrating M&E results in planning, budget allocation and implementation. More specifically, Bakewell, Adams and Pratt (2003) describe some fundamental steps in data collection and processing as well as information management. More generally, Kusek and Rist (2004) as well as Görgens and Kusek (2009) provide a detailed discussion on M&E systems that can help us put this function together.

There are a number of established facts in M&E. First, Baradei et al. (2014) point out that M&E at project level provides narrow but specific operational and performance management insights and recommendations that are not transferable from one project to another unless the projects are really similar, if not identical, whereas M&E at policy and programme level provides broader insights and recommendations. Second, to conduct quality M&E activities at any level, one needs accurate and high-quality data. Third, and obviously, high-quality M&E activities are the product of teamwork among individuals with varied specialisations. Fourth, M&E is a costly exercise. Lastly, M&E activities are sensitive to the cultural and political context3 (Patel 2013).

Similarly, there are several key issues and debates in M&E, such as those described by Baradei et al. (2014). First, they point out that M&E is viewed as a political risk, which explains why M&E reports are not always for public consumption even in countries with considerable political support for accountability and transparency. They propose that the electorate should always demand transparent M&E activities and reporting. Second, they argue that institutions mandated to carry out this function on behalf of government are not always independent of the executive and, therefore, their objectivity is questionable even when they have personnel capable of undertaking robust M&E functions. Third, they observe that there is more M&E at project level compared with programme and policy level. With regard to evaluations, there are more summative evaluations, followed by process evaluations, and hardly any formative evaluations. This implies that the evaluation function is not as effective as it could be when designing and implementing development interventions. Further, most summative evaluations focus on outcomes rather than impacts, and most of them are conducted without baseline values or a benchmark. Fourth, they point out that most evaluations use the quantitative research strategy. Further, most evaluations are desk studies rather than field surveys. These shortcomings imply that evaluations miss out on what other research strategies and empirical data and information can offer. Fifth, most M&E activities face data challenges such as accessibility, quality and accuracy. Further, despite gathering rich information, the results of studies that apply a qualitative research strategy are not generalisable. Those that apply a quantitative research strategy are generalisable but do not generate in-depth and rich data compared with those that employ a qualitative research strategy. Like most research based on literature review and document analysis, these studies provide a rich theoretical interrogation of M&E. However, they lack empirical evidence to back up their findings, proposed frameworks and factors.

Sixth, Baradei et al. (2014) note that most institutions are still using traditional approaches, methods and tools in M&E instead of results-based, outcome-mapping and participatory approaches and methods. Seventh, they are not sure about the explanatory or theoretical frameworks that should be used to interpret M&E results. Put differently, on what framework or environment should we base our M&E function, practice and theory? Eighth, they point out that most reporting is mid-term and at the end of the intervention, in most cases at the request of development partners, at the expense of an institutionalised and periodic M&E function. Besides, reporting and communicating the products of the M&E function do not often target the general public, who are the ultimate beneficiaries of development interventions. Ninth, they observe that M&E activities and personnel are often disproportionately small in most organisations and, therefore, unequal to the challenge. Relatedly, most institutions lack the skills and expertise to analyse M&E data and report results. Tenth, they argue that M&E requires and also results in political and economic stability. It promotes transparency and accountability in the use of public resources. However, the question is: ‘Does monitoring and evaluation actually influence the development and public policy landscape?’ More specifically, does it influence how we design and implement interventions? If so, is it at policy, programme or project level? Similarly, at what level should we consider the M&E function to be ‘anti-government’ due to its potentially critical content? Lastly, there is a limited number of participatory and collaborative approaches to M&E, especially in developing countries (Chouinard & Cousins 2013). The issue is: how can we assess interventions without the participation or collaboration of their beneficiaries?

Proposed institutionalised monitoring and evaluation curriculum

Overall, had it not been for the systems methodology described in Wotela (2016), we would probably not understand the logical relationship between development, public policy, leadership and governance presented in Figure 1. Therefore, the first module on the M&E curriculum should be ‘systems thinking and methodology’. Apart from underpinning the model, systems thinking suits M&E, which is logical by nature and, therefore, requires a module that equips students with systems thinking skills. Such skills will cater for important functions in M&E such as formative evaluation as well as articulation of the theory of change, the results chain and the results framework.

Second, almost any credible definition of M&E refers to development, even though not all credible development literature discusses M&E. The model provides for development, its components and its processes. Therefore, we recommend that ‘development interventions’ should be the second module on the M&E curriculum. Rather than dwelling on dense debates in development, the module should focus on cultural, political, economic, social and environmental development interventions. It should also provide for indicators, attributes and variables for measuring development interventions.

Third, while development interventions provide what we track in monitoring and what we assess in evaluation, there is a need to understand how interventions are brought to fruition. Public policy allows for this understanding and provides M&E with the public policy cycle. Therefore, the third module on the M&E curriculum should be ‘an introduction to public policy’, with a bias towards the public policy cycle framework, to appreciate ‘how’ public sector development interventions should be operationalised. This should include decision-making as well as institutional and organisational arrangements for development interventions, including those for the M&E function. This will also allow for an understanding of performance management and results-based management. We are aware that the model provides for leadership, governance and political economy as important components of public policy. However, these could be covered in the context of public policy rather than as standalone modules.

Fourth, most participating students – who in their own right are seasoned development practitioners and public administrators – felt that most M&E tuition assumes that development interventions are M&E ready. However, this is not the case, and to allow for the M&E function, one has to reorganise the intervention. Therefore, the fourth module should be ‘programme and project: strategic planning’. This should cater for a detailed understanding of the planning process, especially the diagnostics and formulation of development interventions as articulated in Figure 1. Relatedly, the fifth module should be ‘programme and project: financing and budgeting’, which should articulate how to deal with the inputs, that is, budgeting (including performance budgeting) and financing. The students felt that currently this is oversimplified, if it is included at all, making it impossible for them to monitor and evaluate the inputs element of development interventions.

Fifth, the students felt that skipping the implementation and management or administration aspects of monitoring leaves a void in their understanding of monitoring and, more so, in their knowledge of how to actually monitor a development intervention in the real world. To allow for a holistic understanding of monitoring, the sixth module should be ‘programme and project implementation’, which should articulate operations management and monitoring in the public sector. The module should also cater for a detailed understanding of operationalising development interventions – more specifically, a detailed understanding of implementing development interventions as well as of monitoring data management: collection and storage, processing and analysis, reporting and integration into management decision-making. As shown at the bottom of Figure 1, the emphasis in this module should be management and monitoring of inputs, activities, outputs and, to a limited extent, outcomes.

Sixth, there was a heated debate around (1) whether there is a notable difference between research and evaluation and (2) whether good researchers make good evaluators or good evaluators make good researchers. Although this debate was inconclusive, the students did not see any notable difference between research and evaluation other than that evaluation is a more applied form of research, and they felt that good researchers make good evaluators but not vice versa. We, therefore, propose a seventh module that caters for both ‘applied research and evaluation’. This module should provide the qualitative and quantitative skills needed for the formative, process and summative evaluations discussed in Figure 2. The module should strike a balance between rigorous research approaches and relevant research, even if research and evaluation differ. Patton (2008), cited in Porter and Goldman (2013), has pointed out that evaluation differs from research only in that it supports developmental efforts through the provision of practical and specific answers to development challenges.

Seventh, one of the established facts in M&E is the process of continuous data collection and storage, data processing and analysis, as well as reporting and integrating M&E results in planning, budget allocation and implementation. Therefore, the eighth module should be ‘data generation, analysis and management’ to cater for data, information and knowledge generation, analysis and management. This should include an understanding of basic or advanced statistics, if not both, which is an important ingredient in the conversion of data to information and knowledge. However, emphasis should be placed across the entire spectrum of data management, that is, data collection, storage (databases), processing, analysis and reporting. The students felt that currently there is too much emphasis on statistics, which overlooks the fact that the current challenge in M&E includes mere data collection. For example, how does an official in the national Department of Basic Education collect monitoring data from all schools in the country?

Lastly, students agreed that building a monitoring and evaluation system is not child’s play and, unfortunately, when consultants are hired to undertake this task, the resulting system is often not contextualised or does not respond to all of an organisation’s M&E requirements. This gets worse if the M&E personnel have no clue about what the important ingredients of a functional M&E system are. Therefore, the last module should be ‘the monitoring and evaluation system and practice’. This module should put everything together and include skills for developing and managing an implementation framework and its accompanying monitoring and evaluation systems for any development intervention. Contextualisation and application of the concepts should be the main focus of this module.

The first five modules should be extended to senior officials so that they understand the role of M&E in development interventions. Understanding development and its interventions, and how these relate to M&E, can instil a sense of how useful this function is to their primary function, which is crafting and delivering development through public policy. The foot soldiers involved in the actual data collection should be exposed to modules 6, 7 and 8 so that they not only understand their role but also understand how it links to the actual requirements of the data they collect and process. Of course, one can break the proposed modules into more than one module and possibly add modules not reflected here. We, however, reserve the detailed specification of each module for the respective institutions and possibly a future discussion.

We have derived these modules systematically from the model derived elsewhere but re-presented here. Beyond applying systems methodology and a detailed literature review, the proposed syllabus benefited from the active participation of almost 300 postgraduate M&E students. Therefore, the approach, the model and the product are academically idealised but, more so, practically inspired. The proposed curriculum exemplifies what civil servants do and, therefore, has the potential to improve the quality of M&E expertise and hence the quality of evidence collected and the rigour of its analysis. Further, exposure to the proposed curriculum may help with institutionalising M&E and improve its function or its institutional and organisational arrangements. These gains will improve the practice and, therefore, move us towards reaping the rewards of this function rather than undertaking it for compliance. The proposed modules are not in any way meant to make those interested in M&E specialists in development, public policy or management, but are intended to assist them to measure development interventions much more effectively.

Conclusion

In sum, the assumption is that institutionalising M&E training will enhance the profession but, more so, the capacity to assess development interventions (the what) and public policy (the how). Jointly, this will provide the much-needed accountability and transparency in the use of public resources. There are outstanding questions about the training and professionalisation of M&E. For example, though indirectly, Baradei et al. (2014) question what an M&E practitioner should learn and know, to what extent, and how (short term versus long term, or on-the-job training versus time-off training). Further, should M&E be a dedicated function or should it be part of other functions such as strategic planning, budgeting, implementing, managing and decision-making? Regardless, if an M&E practitioner is not involved in an organisation’s core functions of formulating and implementing interventions, how will they incorporate the empirical and experiential lessons that the M&E function is generating? If the M&E function sits outside the formulation and implementation functions, can personnel charged with the latter assess M&E reports so as to incorporate such lessons in their functions? Overall, these questions address the issue of streamlining and institutionalising the M&E function so that it realises its intended aim and objectives. In a future discussion, we propose how one can navigate through the M&E forest to become an established practitioner or professional, since this is a career that individuals do not explicitly choose.

Acknowledgements

This research was partly funded by the Carnegie Large Research Grant facilitated by the University of the Witwatersrand (WITS) Transformation Office. I am grateful to the WITS School of Governance (WSG) M&E students who took part in the devising seminars where we discussed earlier formats of the model presented here. I would also like to thank the WSG members of staff who attended the conversation where this approach was first presented for their helpful comments, as well as Hanlie van Dyk-Robertson and Prof. Anne Mc Lennan for their encouraging remarks. Lastly, I would like to thank Moses Masuaso Nzima, Judy Kusek and Dr Laila Smith, as well as the reviewers, for helping us fine-tune and reconcile our argument and perfect our write-up.

Competing interests

The author declares that he has no financial or personal relationship(s) that may have inappropriately influenced him in writing this article.

References

Acemoglu, D. & Robinson, J.A., 2012, The origins of power, prosperity, and poverty: Why nations fail, Crown Publishers, New York, NY.

Anderson, J., 1975, Public policy-making, Praeger, New York, NY.

Bakewell, O., Adams, J. & Pratt, J., 2003, Sharpening the development process: A practical guide to monitoring and evaluation, International Non-Governmental Organisation Training and Research Centre (INTRAC), Oxford.

Baradei, L.E., Abdelhamid, D. & Wally, N., 2014, ‘Institutionalising and streamlining development monitoring and evaluation in post-revolutionary Egypt: A readiness primer’, African Evaluation Journal 2(1), Art. #57, 1–16.

Brewer, G. & deLeon, P., 1983, The foundations of policy analysis, Brooks/Cole, Monterey, CA.

Burkey, S., 1993, People first: A guide to self-reliant participatory rural development, Zed, London.

Chouinard, J.A. & Cousins, B.J., 2013, ‘Participatory evaluation for development: Examining research-based knowledge from within the African context’, African Evaluation Journal 1(1), Art. #43, 9 pages.

Fisher, R., 1983, ‘Negotiating power: Getting and using influence’, American Behavioural Scientist 27(2), 149–166.

Gereffi, G. & Fonda, S., 1992, ‘Regional paths of development’, Annual Review of Sociology 18(1992), 419–448. https://doi.org/10.1146/annurev.so.18.080192.002223

Geurts, T., 2014, Public policy making: The 21st century perspective, Be informed, Apeldoorn.

Gharajedaghi, J., 2006, Systems thinking: Managing chaos and complexity, a platform for designing business architecture, Elsevier Inc., Amsterdam.

Görgens, M. & Kusek, J.K., 2009, Making monitoring and evaluation systems work: A capacity development toolkit, The World Bank, Washington, DC.

Gray, M., 2006, ‘The progress of social development in South Africa’, International Journal of Social Welfare 15(Suppl. 1), 53–64. https://doi.org/10.1111/j.1468-2397.2006.00445.x

Hill, M. & Hupe, P., 2014, Implementing public policy: An introduction to the study of operational governance, Sage, London.

Jack-Akhigbe, 2013, ‘The state and development interventions in the Niger Delta Region of Nigeria’, International Journal of Humanities and Social Science 3(10), 255–263.

Jann, W. & Wegrich, K., 2007, ‘Theories of the policy cycle’, in F. Fischer, G.J. Miller & M.S. Sidney (eds.), Handbook of public policy analysis: Theory, politics, and methods (vol. 125, pp. 43–62), CRC Press, Boca Raton, FL.

Jenkins, W.I., 1978, Policy analysis: A political and organisational perspective, Martin Robertson, London.

Kusek, J.Z. & Rist, R.C., 2004, Ten steps to a results-based monitoring and evaluation system, The World Bank, Washington, DC.

Lasswell, H.D., 1956, The decision process: Seven categories of functional analysis, Bureau of Governmental Research, University of Maryland Press, College Park, MD.

May, J.P. & Wildavsky, A., 1978, The policy cycle, Sage, Beverly Hills, CA.

McFerson, H.M., 1992, ‘Democracy and development in Africa’, Journal of Peace Research 29(3), 241–248. https://doi.org/10.1177/0022343392029003001

Patel, M., 2013, ‘African evaluation guidelines’, African Evaluation Journal 1(1), Art. #51, 5 pages.

Patton, M.Q., 2008, Utilisation-focused evaluation, Sage, London.

Porter, S. & Goldman, I., 2013, ‘A growing demand for monitoring and evaluation in Africa’, African Evaluation Journal 1(1), Art. #25, 1–29.

Roseland, M., 2000, ‘Sustainable community development: Integrating environmental, economic, and social objectives’, Progress in Planning 54(2000), 73–132. https://doi.org/10.1016/S0305-9006(00)00003-9

Sachs, J.D., 2005, The end of poverty: Economic possibilities for our time, The Penguin Press, New York.

Simeon, R., 1976, ‘Studying public policy’, Canadian Journal of Political Science 9, 548–580.

Simon, H.A., 1945, Administrative behavior, Macmillan, New York, NY.

Slim, H., 1995, ‘What is development?’, Development in Practice 5(2), 143–148. https://doi.org/10.1080/0961452951000157114

Sumner, A. & Tribe, M., 2008, International development studies: Theories and methods in research and practice, Sage, Los Angeles, CA.

Watson, M.D., 2012, ‘The colonial gesture of development: The interpersonal as a promising site for rethinking AID to Africa’, Africa Today 59(3), 3–28. https://doi.org/10.2979/africatoday.59.3.3

Wotela, K., 2016, ‘Towards a systematic approach to reviewing literature for interpreting public and business management research results’, Electronic Journal of Business Research Methods 14(2), 83–97.

Zittoun, P., 2009, ‘The problem of time in policy change: A dia-synchronic perspective of the book-mark’, paper presented at International Political Science Association, Santiago, Chile, 12–16 July 2009.

Footnotes

1. Baradei et al. (2014) have discussed the spread (and focus) of monitoring and evaluation from the 1950s in the United States of America (USA) through Europe and finally to developing countries.

2. Though not as straightforward, interrogation of public policy literature as well as personal correspondence with public policy specialists suggests that institutional arrangements or analysis and organisational arrangements or analysis are components of public policy.

3. One should also familiarise oneself with the ‘evaluation guidelines’ that are packaged into four broad sections: utility, feasibility, propriety and accuracy.


