About the Author(s)


Kieron D. Crawley
Centre for Learning on Evaluation and Results (CLEAR), Wits School of Governance, University of the Witwatersrand, South Africa

Citation


Crawley, K., 2017, ‘The six-sphere framework: A practical tool for assessing monitoring and evaluation systems’, African Evaluation Journal 5(1), a193. https://doi.org/10.4102/aej.v5i1.193

Original Research

The six-sphere framework: A practical tool for assessing monitoring and evaluation systems

Kieron D. Crawley

Received: 18 Nov. 2016; Accepted: 13 Feb. 2017; Published: 12 Apr. 2017

Copyright: © 2017. The Author(s). Licensee: AOSIS.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Background: Successful evaluation capacity development (ECD) at regional, national and institutional levels has been built on a sound understanding of the opportunities and constraints in establishing and sustaining a monitoring and evaluation system. Diagnostics are one of the tools that ECD agents can use to better understand the nature of the ECD environment. Conventional diagnostics have typically focused on issues related to technical capacity and the ‘bridging of the gap’ between evaluation supply and demand. In so doing, they risk overlooking the more subtle organisational and environmental factors that lie outside the conventional diagnostic lens.

Method: As a result of programming and dialogue carried out by the Centre for Learning on Evaluation and Results Anglophone Africa with government planners, evaluators, civil society groups and voluntary organisations, the author has developed a modified diagnostic tool that extends the scope of conventional analysis.

Results: This article outlines the six-sphere framework that can be used to extend the scope of such diagnostics to include considerations of the political environment, trust and collaboration between key stakeholders and the principles and values that underpin the whole system. The framework employs a graphic device that allows the capture and organisation of structural knowledge relating to the ECD environment.

Conclusion: The article describes the framework in relation to other organisational development tools and gives some examples of how it can be used to make sense of the ECD environment. It highlights the potential of the framework to contribute to a more nuanced understanding of the ECD environment using a structured diagnostic approach and to move beyond conventional supply and demand models.

Growing demand for evaluation and evaluation capacity development in Africa

There is increasing evidence of a growing demand in Africa for evidence from monitoring and evaluation (M&E) systems (Porter & Goldman 2013). Accompanying this demand is a corresponding interest among a range of actors in lending support through evaluation capacity development (ECD) initiatives. But while interest from international development partners has been high and resources significant, ECD initiatives have yet to deliver the anticipated results (Tarsilla 2014). Indeed, there are suggestions that the conventional focus on short-term training needs to be replaced with more ‘contextually relevant, systemic learning’ (p. 1). Diagnostic studies are one way of determining the nature of the opportunities and constraints that surround nascent M&E systems, but few established tools adequately address the range and diversity of actual and potential players who demand and supply evaluation evidence in Africa (Centre for Learning on Evaluation and Results Anglophone Africa [CLEAR AA] 2013).

Organisational development

The organisational development (OD) sector is replete with assessment tools, many developed over years by respected applied behavioural scientists such as Burke and Litwin (1992), whose ‘causal model of organisational performance and change’ has guided institutional assessment and development across numerous continents. Marvin Weisbord’s (1976) six-box model and the seven-S model (Pascale & Athos 1988) were early popular frameworks that used a systems approach to organisational assessment, in which interrelationships between organisational components were recognised as driving overall performance. Early models, however, tended to focus exclusively on elements within the organisation and to ignore dynamics arising from issues of power and control, and from relationships of trust between clients and beneficiaries (Reflect and Learn 2016). Subsequent models developed by management consultancy firms such as Universalia (Lusthaus 2002) have incorporated the systems approach and expanded it beyond the institution to consider dimensions encompassing organisational performance, capacity, motivation and environment. But while these frameworks constitute valuable structural maps from which OD diagnostic questions can be generated and onto which information can be positioned, few have been designed to generate insights that emerge from interlinkages between the different elements of the framework, and fewer still (if any) lend themselves specifically to assessing institutional readiness in the field of ECD.

Graphic techniques for capturing structural knowledge

The application of graphic devices to the capture and presentation of structural knowledge has been an area of interest within the education sector for some years. Tony Buzan’s (1976) now ubiquitous mind map gained currency in the 1980s and 1990s as a result of its supposed mimicry of the way in which neural pathways create memory. Its potential to enhance learning and recall by using a spider diagram to map and connect ideas has ensured that the technique has remained popular (albeit with the help of software equivalents) among educators and students to the present day.

Mind mapping sits within a broader field of graphic techniques for representing knowledge, which includes spider mapping, pattern notes, concept maps, schematising and others (Beissner 1994; Eppler 2006). Each of these techniques seeks to represent knowledge not just in terms of its explicit value but also in a manner that fosters the graphic reconstruction of knowledge with a view to generating new insights (Novak & Gowin 1984).

Structural knowledge is defined by Beissner (1994:4) as ‘knowledge that represents the relationships between concepts in a content domain’. If knowledge is considered to infer meaning then according to Mandler (1983:4), ‘meaning does not exist until some structure, or organization, is achieved’.

But what are the benefits of imparting structure to knowledge within a particular field? Beissner (1994) argues that the imparting of structure to knowledge facilitates the formation of mental constructs that enhance understanding and allow it to be applied to new situations. In fact, the application of structure becomes more valuable as the content domain increases in complexity and abstraction. Jonassen (1993) additionally points out that structured knowledge is essential to domain-specific problem-solving.

Graphic techniques to structure knowledge that have emerged over the last 40 years are wide and various; however, they may be categorised into one of two recognisable groupings (Eppler 2006): node-linking mapping methods that connect discretely defined concepts to illustrate relationships (these include cognitive mapping, mind mapping, semantic networks and process event chains) and structural maps that present an overall framework on which information can be positioned (Venn diagrams, radar charts, impact wheels). These are often presented in the form of a conceptual diagram, a pictorial metaphor or a combination of both (problem trees, fishbone analysis, ‘bridging the gap’) (Eppler 2006).

So once the representations of structural knowledge have been constructed, what is their added value? Beissner (1994) highlights the importance of analysis (the breaking down of ideas into constituent parts), elaboration (relating new knowledge to what is already known) and, importantly, integration (forming associations between ideas from within a new content domain).

While the interest in graphic techniques to represent structured knowledge has been driven principally by an interest to enhance understanding among learners, there is potential for the use of this technique to construct knowledge around the complex dynamics that exist within the environment for ECD.

Conducting M&E readiness assessments – The work of Kusek and Rist

Readiness for M&E in conventional contexts is typically assessed using well-established and documented diagnostic tools (Kusek & Rist 2004). In their book Ten Steps to a Results Based Monitoring and Evaluation System, Kusek and Rist (2004) set out a series of questions intended to guide an M&E readiness assessment. These are designed to identify issues associated with the technical capacity of individuals within the system, the capacity of the system itself to generate and supply information, and the linkages of the system to key decision-making processes. Further questions are designed to uncover potential sources of pressure that might drive demand for evaluation evidence and to identify champions who might help support the cause. In short, the readiness assessment questions focus on understanding M&E technical capacity gaps and the potential to identify and influence political champions who can ease the way for the introduction of M&E systems. A graphical representation of the trajectory of such readiness assessments might be constructed as in Figure 1. A typical theory of change for ECD that follows might be summarised as:

Potential users of evaluation come to recognise that they can affect policy processes to their benefit through using evaluation and create demand. If managers and conductors of evaluation have the capacity, political understanding and funds, then they respond to the demand from users. If commissioning and use of evaluation becomes widespread, then virtuous cycles of evaluation capacity development take place, leading to more institutionalised evidence-based practice. (CLEAR AA 2013:10)

FIGURE 1: Conventional approach to M&E readiness assessment.

On reflection, the limitation of both the diagnostic and the ECD theory of change described above is that it assumes that:

  • There are sufficient surplus resources available within the institution or state to build M&E systems without compromising much-needed frontline services.
  • States (and institutions) have adequately established democratic systems that have within them political space for M&E champions.
  • Within these political spaces there is room for champions to challenge vested interests.
  • There is sufficient trust and collaboration between institutions, government and key ECD stakeholders.
  • All stakeholders hold a common vision for development (to which the use of evaluation as evidence of performance is seen to contribute).

While these may be reasonable assumptions in the majority of countries that are members of the Organisation for Economic Cooperation and Development (OECD), we might legitimately question the degree to which they hold across the diverse range of institutions and states that make up the African continent. Clearly, a more nuanced diagnostic framework is needed to guide us through the complexities that are the reality within non-OECD states.

The six-sphere framework

Given the diversity of ECD contexts in which CLEAR AA is engaged within Africa, the six-sphere framework was developed to both encompass conventional diagnostic and readiness tools and extend their scope to include considerations of the political environment, trust and collaboration between key stakeholders and the principles and values that underpin the ECD system. As a visual tool, it draws on the tradition of graphic devices for capturing and presenting structural knowledge (Beissner 1994). The framework is illustrated in Figure 2.

FIGURE 2: The six-sphere framework.

The framework is arranged as a hierarchy of thematic spheres that move progressively (from surface to core) through logistical, technical, contextual, relational, political and ideological domains. The presentation of spheres as ‘nested’ is intended to convey:

  • The explicit and immediate nature of factors that fall into the external logistical and technical spheres.
  • The tacit and ‘deep-rooted’ nature of factors that fall into the central political and ideological spheres.
  • A hierarchy of influence that flows from core to periphery.

While inner spheres are placed ‘internal’ to those at the periphery, the graphic is not intended to imply a relationship in terms of ‘sets’ and ‘sub-sets’ such as one would expect in a Venn diagram. The nature of each domain and how it relates to the ECD landscape is described below.
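
To make the nested arrangement concrete, the hierarchy can be expressed as a simple ordered structure. The Python sketch below is purely illustrative and forms no part of the published framework; the field names and the explicit/tacit cut-off at the technical sphere are assumptions made for the example.

    from dataclasses import dataclass, field

    # Spheres ordered from surface to core, as in Figure 2.
    SPHERES = ["logistical", "technical", "contextual",
               "relational", "political", "ideological"]

    @dataclass
    class Sphere:
        name: str
        depth: int                      # 0 = surface, 5 = core
        findings: list = field(default_factory=list)

        @property
        def is_explicit(self) -> bool:
            # Surface spheres hold explicit, immediate factors;
            # deeper spheres hold tacit, 'deep-rooted' ones.
            return self.depth <= 1

    framework = [Sphere(name, depth) for depth, name in enumerate(SPHERES)]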

The logistical sphere – Resources, time and money

The logistical sphere encompasses considerations related to time and resources (available or not available) to generate monitoring and evaluation data and to move it along the demand and supply chain. On the supply side this includes a consideration of time and resources made available to evaluators to generate evaluations; on the demand side, time and resources made available to decision-makers who identify and use evaluations. The logistical sphere also encompasses a consideration of factors that impact the timeliness of information flowing through the system. In terms of ECD interventions the logistical sphere is the realm in which agents leverage funds, set timetables and schedules for change, promote access to information and raise awareness.

The technical sphere – Technical capacity, tools and systems

The technical sphere encompasses considerations related to the evaluation technical capacity of individuals and departments on both the demand and supply sides. On the supply side this may relate directly to the capacity of evaluators within the system in question; on the demand side, to the capacity of decision-makers to interpret and use evaluation evidence.

Linked to the technical capacity of agents within the system is a consideration of the quality of the M&E products that flow within it. Naturally, not all evaluation evidence is of equal quality: South Africa’s Department of Planning, Monitoring and Evaluation (DPME) has created a repository of all evaluations of government programmes carried out since 2006, which, in addition to making available a range of 83 recent evaluations, grades them in accordance with a specified quality assurance tool (DPME 2014). Uganda (GOU 2016) and Benin (Porter & Goldman 2013) have also made significant progress in collecting and collating government evaluations but have yet to establish a system for assessing their quality.

Further to a consideration of the technical capacity of agents and the quality of evaluation products is a consideration of the tools and systems that channel and transmit M&E information. Within the technical sphere, we are prompted to ask about the extent to which systems are uniformly and consistently established, their depth and breadth across government departments and the extent to which they are fit for purpose. In terms of ECD interventions, the technical sphere is the realm of curriculum development, capacity building and system development.

The contextual sphere – Structures, linkages, networks and culture

The contextual sphere encompasses considerations related to structures, linkages, networks and culture. These are described in a little more detail below.

Structure

At an institutional level, considerations of structure encompass the organisational arrangements and hierarchies within the institution that facilitate the production and use of M&E data. This may include reflection on the positioning of M&E units within the institution in question or, at a broader level, the positioning of M&E coordinating units or departments within the executive. Given that the positioning of M&E units or coordinating departments determines on the one hand their perceived independence and objectivity, and on the other their clout and influence (Kusek & Rist 2004), a consideration of structure is hugely important.

Linkages and networks

Aligned with a consideration of structure is a consideration of linkages and networks. The institutional arrangements between the coordinating bodies for M&E and line ministries or departments are a key factor in determining relationships between them and hence the ease with which evaluation evidence can flow. So too are the linkages between government departments as users of evaluation evidence and the agencies that produce the data that feed them. A consideration of the linkages between government and civil society as a potential catalyst for demand for evaluation evidence is also encompassed within the contextual sphere, as is the extent to which the media (including social media) connect citizens with government.

Culture

The contextual sphere also encompasses what is often referred to as the ‘culture of evaluation’ (Gorgens & Kusek 2010). While the forces that help forge a culture of evaluation arguably emanate from the relational and political spheres (which follow), it is in the contextual sphere that evaluation culture takes root and shapes the way institutions connect with each other. Within this sphere, we might reflect on the extent to which evaluation culture embraces openness to learn from both strong and poor performance as well as the extent to which there is an established culture of holding those responsible for poor performance to account.

While structure, linkages and culture make up the contextual sphere, it is also, at a deeper level, where incentives and disincentives for the use of evaluations, emerging from the political economy, may reside.

The relational sphere – Trust, commitment and collaboration among key stakeholders

The relational sphere encompasses considerations related to levels of trust and collaboration between evaluation system stakeholders. If evaluation evidence is to flow efficiently throughout the system from producer to user, then it is important that there is collaboration and trust not only between producers and users but also between those who act as transmission points or gatekeepers along the chain. A lack of trust among stakeholders is likely to lead to evaluation evidence that is diverted, delayed or even discarded within the system. A chronic shortage of trust between stakeholders may lead to environments that are hostile towards the use of evidence and lead to perceptions that evaluations are tools designed to punish poor performance. It is interesting to note that while trust and collaboration by nature take time to form, they can rapidly be eroded.

The political sphere – Leadership and vision for change

The political sphere encompasses considerations around the extent to which leadership and vision for change exist at critical parts of the system. If a government-wide system is being considered, then leadership within the higher levels of the executive, and even the presidency, may be necessary (as well as support within political parties). If an institution-wide system is being considered, then consideration of the political sphere encompasses leadership among senior managers and directors. It is worth noting that while political leadership and vision may be necessary to support constructive change in terms of M&E, the establishment of such systems may run counter to the interests of significant political elements, which may in turn challenge or even undermine such efforts.

The ideological sphere – Values and principles

The ideological sphere encompasses considerations related to the extent to which the use of evaluation as evidence resonates with core principles and values at the heart of the institution or, at a wider level, with the ideological underpinnings of the state. This might be reflected, for example, in the extent to which there is a demonstrated commitment to principles of transparency and accountability or to deeply held values underpinning equitable development. In some cases, these underlying principles may not be explicit, and some work might be required to unearth them; this in itself may be regarded as an indication of the state of health of the ideological sphere. In other cases, the underlying principles may run counter to the use of evidence, together with the transparency and accountability that come ‘bundled’ with it.

Integration with other models

As is apparent, readiness assessments such as the format outlined by Kusek and Rist (2004) can be accommodated within the six-sphere model, occupying principally the technical and political spheres. Interestingly, the six-sphere model moves beyond the scope of conventional models by encompassing relational (trust and collaboration) and ideological (principles and values) considerations. In terms of Eppler’s (2006) categorisation of graphic tools, the six-sphere model would appear to incorporate aspects of ‘conceptual mapping’ (p. 202) that illustrate the relationships between concepts, ‘node linking mapping-methods’ (p. 204) that depict process flows and dependencies, and non-node linking maps that use ‘an overall structure to map or position information meaningfully’ (p. 204).

A hierarchical arrangement

In addition to the broadened scope of factors that are included for consideration, the proposed framework also offers insights that emerge from the hierarchical arrangement of its dimensions. In this sense, the framework would also seem to qualify as a ‘theoretical thinking tool’ (Jerneck 2014:15). Copestake (cited in Jerneck 2014:16) claims that such ‘mental models exist as a form of bounded rationality or deliberate simplification precisely to facilitate action in the face of potentially overwhelming system complexity’.

At the core is an ideological belief. If this belief incorporates the principles of equitable development, transparency and accountability, then in terms of establishing an M&E system each subsequent level (being contingent on the previous) leads to a virtuous cascade: if there is a ‘belief in equitable development and the value of accountability and transparency’, then there will likely be ‘political vision for M&E’; if there is ‘political vision for M&E’, there will likely be ‘trust and collaboration among M&E stakeholders’; if there is ‘trust and collaboration among M&E stakeholders’, then there will likely be interest in ‘building technical capacity’; and so on. Of course, if principles of transparency and accountability are not enshrined at the core of the system, the opposite dynamic may be evident.
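
The contingent logic of this cascade can be sketched in code. The function below is a toy model under stated assumptions: readiness is treated as a simple boolean that propagates from the core outwards, and the notion of a ‘blockage’ at a given sphere is an illustrative device, not part of the framework itself.

    # Toy model of the 'virtuous cascade': readiness at each sphere is
    # contingent on readiness at the sphere beneath it (core outwards).
    def cascade(core_belief, blocked=frozenset()):
        spheres = ["ideological", "political", "relational",
                   "contextual", "technical", "logistical"]  # core -> periphery
        readiness, supported = {}, core_belief
        for sphere in spheres:
            supported = supported and sphere not in blocked
            readiness[sphere] = supported
        return readiness

    # Core belief present, but trust between stakeholders absent:
    print(cascade(True, blocked={"relational"}))
    # -> ideological and political ready; relational and all outer spheres not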

Levels of complexity in terms of what is needed to leverage change increase as we move inwards. At the periphery, we can think of leveraging resources and providing technical expertise as a means of tackling ECD logistical and technical challenges. Towards the centre however it becomes less clear as to how we shift political will and alter core beliefs, particularly if these do not resonate with values that underpin the use of evaluation as evidence. This suggests that developmental change takes place slowly at the centre (ideology and political vision) but may take place more rapidly as we move outwards (technical capacity and resources).

Surface domains (logistical and technical) are relatively straightforward to address through conventional interventions (e.g. allocating resources and capacity building). However, as we move inwards through the contextual, relational, political and ideological domains, these become increasingly challenging to influence. At the core (ideological) we may have no direct influence over change at all; ECD interventions that seek to change institutions or states at these levels may themselves be challenged.

A further reflection on the framework suggests that the periphery (the logistical and technical spheres) is where evidence is generated, but that the centre (ideological, political and relational spheres) is key in determining whether evaluation evidence is used. In moving from the periphery to the centre M&E information is transformed progressively from data through to evidence and ultimately judgement. In a sense, the inner layers are essentially where the political economy operates while the outer layers are where evaluators operate.

Considering the framework as outlined in Figure 2, it becomes clear that while many states may already have achieved ‘readiness’ across the inner domains (ideological, political and contextual) as part of their journey of development or democratisation, many others have yet to do so. It is not surprising, therefore, that ECD in these contexts tends to focus on the outer layers (arrow A).

Typical interventions here focus on allocation of resources, technical capacity building and developing the institutional context (Tarsilla 2014). Less developed states and institutions may have variable readiness at all levels (or in some cases none at all). This suggests that for these entities a much deeper intervention is required to bring about ‘readiness’ across the range of stakeholders for an M&E system.

The framework also suggests that while ECD in developmental states may take the allocation of resources and the building of M&E technical capacity as a starting point, unless there is accompanying support from the deeper levels, such efforts are unlikely to be successful.

Linkages between the six-sphere levels

One of the attributes of the six-sphere framework as a visual tool is that the hierarchical arrangement of layers can generate insights through exploring relationships between ideas that are situated on different levels. Some examples of this process of integration (Beissner 1994) are listed here.

Using the framework as a model, evidence can be thought of as being produced at the periphery, whereas the demand to use it emanates from the centre. As evidence is drawn from the periphery it is translated from raw data through the filters of institutional culture, political viewpoints and, finally, deeply held principles and beliefs before it is used to make judgements.

For an evaluation to be regarded as professional, it must be seen as both competent and objective (technical and contextual spheres). The extent to which it will be welcome may additionally be shaped by how easily it aligns with political agendas. To be welcomed (and used), it must therefore resonate with both the political agendas and the ideological standpoints at the centre of the system (political and ideological spheres).

This visual analogy would seem to resonate in a very real sense with our experience that even though evaluations may be technically sound, the extent to which they spotlight good or bad performance inevitably determines the reception they have among senior decision-makers, policymakers and politicians. This journey across the relational and political spheres, where trust and collaboration between stakeholders and alignment with political agendas are major factors, means that the journey may sometimes be smooth but is more likely than not to be turbulent.

Using the six-sphere framework as a diagnostic tool

As well as providing some insights into the nature of institutions and how they might be expected to react to moves to systematise M&E, the framework also suggests a series of guiding questions that can be used within an M&E diagnostic. A selection of these is reproduced in Table 1 (Crawley 2014).

TABLE 1: Diagnostic questions generated using the six-sphere framework.

Used as an initial round of questions, the framework has provided CLEAR AA with a valuable structure from which to drill down into deeper aspects of ECD. An analysis of findings allows the researcher to determine where within the institution the major opportunities lie to strengthen the organisation across the six-sphere dimensions, where major constraints and blockages exist, where issues are clustered, to what extent these are explicit (logistical and technical) or implicit (relational, political and ideological), and how the factors identified are interrelated.
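
As a hypothetical illustration of this analysis step, the sketch below groups diagnostic findings by sphere and flags whether each cluster is explicit or implicit. The sphere groupings follow the article; the sample findings and the ‘intermediate’ label for the contextual sphere are assumptions made for the example.

    from collections import Counter

    EXPLICIT = {"logistical", "technical"}                  # surface spheres
    IMPLICIT = {"relational", "political", "ideological"}   # core spheres

    findings = [  # hypothetical (sphere, issue) pairs from a diagnostic
        ("technical", "no quality assurance tool for evaluations"),
        ("logistical", "M&E unit understaffed"),
        ("relational", "line ministries distrust the coordinating unit"),
        ("relational", "evaluators excluded from planning meetings"),
    ]

    clusters = Counter(sphere for sphere, _ in findings)
    for sphere, count in clusters.most_common():
        kind = ("explicit" if sphere in EXPLICIT
                else "implicit" if sphere in IMPLICIT else "intermediate")
        print(f"{sphere}: {count} issue(s) ({kind})")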

Using the six-sphere framework to map out and make sense of what we are hearing

Conversations and dialogues around ECD serve to highlight the contextual diversity and complexity of the sector. At events organised by CLEAR AA in Cotonou and Johannesburg in 2015, delegates from 11 national governments, from both Anglophone and Francophone countries, identified a range of focal areas that require attention if evaluation is to become sustainably rooted within state governance systems (CLEAR AA 2015), including:

  • resource allocation
  • knowledge production and communication
  • institutionalisation of evaluation
  • the policy environment
  • leadership
  • public sector attitudes
  • alignment with national plans and priorities
  • the engendering of a culture of evaluation
  • technical capacity building
  • the identification and supporting of champions.

These areas might potentially serve to guide a wide range of ECD interventions. The reality, however, is that they do not sit discretely and independently of each other. In fact, they are implicitly and sometimes intricately linked, with causality flowing in multiple directions.

As an illustration, S. Gariba (interviewed by K. Crawley, 12 October 2015) discusses the challenges of leveraging greater budget allocation for evaluative processes within government departments. Budgets are under the control of senior government policymakers who, if sufficiently convinced of the merits of using evaluation as evidence for decision-making, will likely advocate for the allocation of resources accordingly. He proposes that the reluctance of such policymakers to advocate for greater use of evaluation may not necessarily stem from the fact that they do not perceive it as useful, but rather that they see evaluation as a highly technical field about which they know very little. As Gariba points out, the risk that a senior member of government might embarrass themselves by failing to convincingly explain an approach that they are championing may outweigh the wider potential benefits to the system. Yet senior policymakers are not normally among those who see fit to attend technical training courses.

One solution to this hypothetical conundrum has been DPME’s initiative to provide technical orientation for senior policymakers through ECD training, with the intention not of making them expert evaluators, but rather of providing them with sufficient knowledge to act confidently as champions for others who will produce and use evaluation evidence.1 Once ‘converted’, policymakers are more likely to influence budget allocations that may be used not only for evaluative processes but also for expanded technical capacity building. Charting this path on the six-sphere framework demonstrates that an intervention within the technical sphere (capacity building training) has the potential to influence change within the political sphere (leadership and vision), which in turn can leverage positive change within the logistical sphere (allocation of resources). A mapping of this process onto the six-sphere framework is shown in Figure 3.

FIGURE 3: The six-sphere framework in use.
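
One way to record such a pathway for later comparison is as an ordered list of (sphere, step) pairs, as in the hypothetical sketch below; the step descriptions paraphrase the narrative above, and the representation itself is an assumption, not part of the published framework.

    # Charting the Gariba example as a pathway across spheres (cf. Figure 3).
    pathway = [
        ("technical",  "orientation training for senior policymakers"),
        ("political",  "policymakers champion evaluation with confidence"),
        ("logistical", "budgets allocated to evaluation and capacity building"),
    ]

    for i, (sphere, step) in enumerate(pathway, start=1):
        print(f"step {i} [{sphere} sphere]: {step}")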

While the narrative above is fascinating, the challenge for proponents of ECD is to structure the understanding and insights that emerge from such narratives in a manner that allows a coherent strategy to be constructed. Using the six-sphere framework to structurally reconstruct knowledge is potentially a powerful way to make sense of what we are hearing and to identify pathways for change.

Conclusions

Factors supporting or constraining the use of M&E evidence in decision-making are wide and various. Conventional approaches to ECD tend to focus on allocation of resources, building technical capacity and identifying champions to help ‘bridge the gap’ (ed. Segone 2008:8) between supply of and demand for evaluations. The use of visual frameworks such as the six-sphere model as a basis for diagnostic study draws out both explicit and implicit factors that influence the uptake of M&E evidence and the extent to which institutions are ready for M&E systems.

The tool has been shown to highlight linkages between factors across dimensional levels and to help to make sense of what is otherwise a complex tangle of issues. Participative use of the tool allows the uncovering and exploration of issues that may otherwise remain hidden.

Acknowledgements

The article outlines a framework that may be used as both a diagnostic tool to build understanding of the health of monitoring and evaluation systems and a framework to design evaluation capacity development interventions. The tool has been developed by the author and used to inform the work of the Centre for Learning on Evaluation and Results Anglophone Africa.

Competing interests

The author declares that he has no financial or personal relationship(s) that may have inappropriately influenced him in writing this article.

References

Beissner, K.L., 1994, ‘Using and selecting graphic techniques to acquire structural knowledge’, Performance Improvement Quarterly 7(4), 20–38. https://doi.org/10.1111/j.1937-8327.1994.tb00648.x

Burke, W. & Litwin, G., 1992, ‘A causal model of organisational performance and change’, Journal of Management 18(3), 523–545. https://doi.org/10.1177/014920639201800306

Buzan, T., 1976, Use your head, BBC Publications, London.

CLEAR AA, 2013, Study on the demand and supply of evaluation/evaluative research in selected sub-Saharan African countries, University of the Witwatersrand, Johannesburg.

CLEAR AA, 2015, Anglophone African dialogue: Drivers of demand for evaluation, Conference Report, CLEAR AA, Johannesburg.

Crawley, K.D., 2014, Political economy as a critical factor in shaping approaches to M&E capacity building interventions, Africa Evidence Network Colloquium, AEN, Johannesburg.

Department of Planning, Monitoring and Evaluation (DPME), 2014, Standards for evaluation in government, DPME, Pretoria.

Eppler, M., 2006, ‘A comparison between concept maps, mind maps, conceptual diagrams, and visual metaphors as complementary tools for knowledge construction and sharing’, Information Visualisation 5, 202–210.

Gorgens, M. & Kusek, J., 2010, Making monitoring and evaluation systems work, The World Bank, Washington, DC.

Government of Uganda (GOU), 2016, Evaluations, Government Evaluation Facility, viewed 18 November 2016, from http://gef.opm.go.ug

Jerneck, A., 2014, ‘Searching for a mobilization narrative on climate change’, The Journal of Environment & Development 23(1), 15–40. https://doi.org/10.1177/1070496513507259

Jonassen, D.H., 1993, Structural knowledge: Techniques for representing, conveying, and acquiring structural knowledge, Psychology Press, Hove.

Kusek, J. & Rist, R., 2004, Ten steps to a results based monitoring and evaluation system, The World Bank, Washington, DC.

Lusthaus, C., 2002, Organisational assessment: A framework for improving performance, IDB, IDRC, Washington, DC.

Mandler, J., 1983, Stories: The function of structure, paper presented at the annual convention of the American Psychological Association, Anaheim, CA, 26–30 August.

Novak, J.D. & Gowin, D.B., 1984, Learning how to learn, Cambridge University Press, Cambridge.

Pascale, T. & Athos, A., 1988, The art of Japanese management, Penguin, London.

Porter, S. & Goldman, I., 2013, ‘A growing demand for monitoring and evaluation in Africa’, African Evaluation Journal 1(1), a25. https://doi.org/10.4102/aej.v1i1.25

Segone, M. (ed.), 2008, ‘Bridging the gap – The role of evaluation in evidence based policy making’, UNICEF, Geneva, Switzerland.

Tarsilla, M., 2014, ‘Evaluation capacity development in Africa: Current landscape of international partners’ initiatives, lessons learned and the way forward’, African Evaluation Journal 2(1), a89. https://doi.org/10.4102/aej.v2i1.89

Weisbord, M., 1976, ‘Organizational diagnosis: Six places to look for trouble with or without a theory’, Group & Organization Studies 1(4), 430–447. https://doi.org/10.1177/105960117600100405

Footnote

1. UCT in collaboration with DPME runs regular executive courses in evidence-based policy making and implementation for senior policymakers.


 
