About the Author(s)


Eureta Rosenberg
Environmental Learning Research Centre, Faculty of Education, Rhodes University, Grahamstown, South Africa

Karen Kotschy
Environmental Learning Research Centre, Faculty of Education, Rhodes University, Grahamstown, South Africa

Citation


Rosenberg, E. & Kotschy, K., 2020, ‘Monitoring and evaluation in a changing world: A Southern African perspective on the skills needed for a new approach’, African Evaluation Journal 8(1), a472. https://doi.org/10.4102/aej.v8i1.472

Note: Special Collection: SAMEA 7th Biennial Conference 2019.

Original Research

Monitoring and evaluation in a changing world: A Southern African perspective on the skills needed for a new approach

Eureta Rosenberg, Karen Kotschy

Received: 21 Feb. 2020; Accepted: 22 June 2020; Published: 23 Oct. 2020

Copyright: © 2020. The Author(s). Licensee: AOSIS.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Background: While science and modern technology have brought many advances, we have also come to overshoot planetary boundaries, and we still fall short of development goals to eradicate poverty and inequality. A growing recognition of the complexity of development problems and contexts calls for new framings, including a new approach to monitoring and evaluation (M&E) as one of the mechanisms by which modern societies aim to steer towards a more sustainable future. New approaches to M&E mean new skills for the M&E practitioner.

Objectives: This article proposed a framing for M&E skills, comprising technical, relational and transformational (T-R-T) competences.

Method: Adapted from the literature, this competence framework was tested in a broader learning needs assessment and then applied retrospectively to the authors’ experience in developmental evaluations in complex social–ecological contexts in southern Africa.

Results: The emerging insights were that not only is technical competence needed, but also relational competence that goes beyond interpersonal skills, to enable the production and uptake of evaluation findings. In addition, the limitations of mainstream M&E methods in the face of complexity seemed to create a need for ‘transformational’ competence, which included evaluators’ ability to develop credible M&E alternatives.

Conclusion: The T-R-T framework helped to advance the notions of ‘hard’ and ‘soft’ skills and expanded on existing M&E competence frameworks. Recommendations included a call for innovative educational and professional development approaches to develop relational and transformational competencies, in addition to training for technical competence.

Keywords: Evaluation; Skills; Competencies; Technical; Relational; Transformational; Training.

Introduction

Background

Why is another article about evaluator competence needed? In a special issue of the Canadian Journal of Program Evaluation dedicated to evaluator competence and professionalisation, Podems (2014) quoted the Joint United Nations Programme on HIV/AIDS (UNAIDS) to the effect that evaluator competencies are products of their time: as evaluation thinking evolves and social and political contexts change, new perspectives emerge on the skills we need for evaluation. This article looks at the implications for evaluator competence of the complex social–ecological contexts that formed part of the ‘changing world’ theme of the 2019 conference of the South African Monitoring and Evaluation Association (SAMEA). The growing awareness of climate change, of development failures resulting in global poverty and unsafe migration, and of pandemics such as COVID-19 is creating new contexts in which a fresh look at monitoring and evaluation (M&E) and evaluator competence is required.

However, pronouncing on what competencies evaluators need is a contested topic (Podems, Goldman & Jacobs 2014:77). Producing competence ‘lists’ can serve the purposes of accreditation and evaluator selection, but can also result in gatekeeping, which could in turn eliminate innovative approaches and entrench a status quo when change may rather be required. This article argues for a competence framework that can avoid, and even overcome, a reifying and narrowing bureaucratisation of evaluation standards. It is written with a view to informing the education and professional development of evaluation practitioners. Specifically, it speaks to the kinds of competencies evaluators need today when they work developmentally in complex contexts, which necessitate complexity-sensitive development approaches (Britt 2016), which in turn affect the kinds of M&E practices that can match such contexts. The article provides a framing for thinking about the competencies that evaluators need in complex contexts, and identifies and discusses examples of such competencies, based on a retrospective analysis of the authors’ research and field experience as researchers, evaluators, mentors and educators.

The article starts by providing a perspective on the ‘changing world’ of the SAMEA 2019 conference theme. Drawing, among others, on the work of Snowden (Kurtz & Snowden 2003), Rogers (2008), Patton (2010) and Funnell and Rogers (2011), an argument is made that the framing of development challenges as complex matters requiring new forms of development also calls for new approaches to M&E. What skills or competencies do evaluators need in these circumstances, and how should evaluation educators teach for such competencies?

No study was conducted to investigate the above problem. Rather, the authors drew on the literature and conducted a retrospective review of their own broader research, and of participatory observations during field experience, to analyse the competencies evaluators (M&E practitioners) need today. This research and the contexts of observation are briefly described before the findings are shared and discussed. The article concludes with the implications for evaluation education and professional development, and a statement on its limitations.

Problem statement

The world is changing. In southern Africa, it is getting hotter and drier; storms are more frequent and more intense; and disease and poverty burdens are rising while food security is increasingly at risk. At the same time, our understanding of the world is changing. Growing numbers of experts and ordinary citizens are concerned about the ability of mainstream development and economic growth models to eradicate poverty and inequality. In his seminal work on reflexivity, Risk Society, sociologist Ulrich Beck (1992) argues that as a global society we have become more reflexive and, as a result, we see the shadow sides of modernity, including the negative aspects of uncritical scientific and economic trajectories. In the climate change protests of our time, young people are calling for systems change, reflecting awareness of the systemic reasons why human activities are overshooting the boundaries of planet Earth (Figure 1 shows a range of environmental risk levels determined by earth scientists). At the same time, service delivery protests and economic migrations, in Africa and globally, demonstrate that despite significant development efforts and progress, global development goals remain unmet; that is, we fail to meet the basic needs of all the planet’s citizens. These issues are exacerbated, and vividly illustrated, by dramatic health concerns such as the COVID-19 pandemic.

FIGURE 1: Human activity exceeding planetary boundaries.

Many argue that a new approach to development is needed, one that will be ecologically sustainable, economically more inclusive and socially just (see e.g. Raworth 2017). And as society deliberates what such new development approaches may look like, it has also identified the need for new approaches to M&E as one of the fundamental mechanisms in modern societies to help steer development (O’Donoghue 1986; Rosenberg 2019).

With its emphasis on credible evidence through systematic reasoning and technique, M&E is a social science, and Beck (1992) noted the shadow side of using science uncritically. The evaluation community has been self-critical, and its reflexivity is reflected in calls for participatory evaluations, in critiques of the ‘fourth generation’ of constructivist approaches, and in the emergence of what some have termed a ‘fifth generation’ of theory-based evaluations (Brousselle & Buregeya 2018), intended to strengthen evaluators’ ability to explain outcomes and therefore better support learning in and across interventions (Rosenberg 2017). Yet most evaluation students still learn mainly the basics of the ‘gold standard’ of evaluations for simple and complicated contexts (Funnell & Rogers 2011). As practitioners in the field increasingly encounter complex challenges, and complexity-sensitive programmes designed to respond to them, they are tentatively working out alternatives to the frames and techniques they have studied (Colvin, Rosenberg & Burt 2017; Rosenberg et al. 2015). Responses to interactive conference sessions suggest that their training has not adequately prepared them for the complexity they encounter in the field.

The kinds of contexts and programmes where evaluators work are increasingly recognised as complex, rather than simple or complicated (Funnell & Rogers 2011; Patton 2010). Complex contexts feature multiple interacting variables in dynamic and open systems, resulting in high levels of uncertainty about the consequences, intended and otherwise, of interventions. In these circumstances programme implementers seldom have a blueprint to follow. This means that development agencies and governments have an enormous need to learn from practice and from the outcomes of development initiatives, to find solutions for these complex issues. One critical source of such learning should be M&E processes that track outcomes and cumulative impacts of development interventions. However, Smith, Pophiwa and Tirivanhu (2019) argue that M&E in Africa has been heavily influenced by a technocratic approach that puts the focus on accountability – tracking expenditure, activities and outputs – and neglects to support ongoing and cumulative learning.

Monitoring and evaluation for learning purposes requires a different approach from the standard procedures for tracking accountability. Drawing on theory-based and realist evaluation approaches, Pawson and Tilley (1997) probed ‘what works for whom and why’ to allow for explanatory evaluation, and therefore learning, within and across cases. But how do evaluators take complexity into account (complexity being well contrasted with simple and complicated contexts by Snowden; see Kurtz & Snowden 2003)? In a USAID publication, Britt (2016) called for complexity-sensitive evaluations, while Patton (2010) gave prominence to developmental evaluation as a response to complexity. Funnell and Rogers (2011) provided examples of non-linear theories of change and log frames suitable for complex contexts.

During the multi-year implementation of the RESILIM-O programme by the Association for Water and Rural Development (AWARD), which addressed resilience to climate change in the Olifants River catchment, evaluators and programme staff collaborated on designing and implementing a complexity-aware M&E system to optimise learning. A hybrid M&E model was implemented, using standard tools like indicators, log frames and reports in non-standard ways. Here it was noted that the appointed M&E officers had been trained to be proficient in the technical skills of tracking expenditure, activities and outputs in linear, log frame-based spreadsheets and reports. However, they found it challenging to design and implement complementary or alternative methods to support programme learning in a complex context. Thus the question arose: What competencies do evaluators need in these new contexts? The question was also pertinent in the Tsitsa River Project (formerly NLEIP), a transdisciplinary land restoration and livelihoods initiative in the Eastern Cape of South Africa, implemented by Rhodes University with the Department of Environment, Forestry and Fisheries (DEFF) (Botha, Kotschy & Rosenberg 2017).

This article is an attempt to better understand the competency needs that became evident in these contexts, to conceptualise suitable training and professional development programmes in a university context. It is based on a retrospective analysis of observations made in these and other programmes (see Table 1) against a framework adapted from the literature and applied in a national learning needs assessment, which is described next.

TABLE 1: Findings on competencies evaluators need, as observed in the field.

Methods

A study that has provided a useful framing for graduate-level competency needs is a learning needs assessment for the green economy policy-practice context in South Africa (PAGE 2016), conducted for the Partnership for Action on the Green Economy (PAGE) under the auspices of the United Nations Institute for Training and Research (UNITAR), the International Labour Organisation (ILO), DEFF (then the Department of Environmental Affairs) and others. The assessment determined what skills policy practitioners need to advance a green economy, and what educational interventions would be suitable to produce such skills. It was the second such assessment undertaken through PAGE, the first being in Ghana. The South African study featured a mixed-methods approach consisting of focus group discussions with experts and stakeholders; online questionnaires; document analysis for a policy review; mini-case studies based on document analysis and telephonic interviews; an education provider audit; and face-to-face in-depth interviews with practitioners and educators. It resulted in a range of competencies being identified and clustered (PAGE 2016).

Two outcomes of the learning needs assessment are relevant to the current article. The first is the value of the conceptual framework that was used to determine learning needs. This framework conceptualised learning needs as ‘competencies’ (the plural of competence), taken to mean a complex of intertwined knowledge, values/dispositions and application skills. This framing was in turn adapted from competency research in the sustainability sciences (Wiek, Withycombe & Redman 2011). The range of competencies described by Wiek et al. was synthesised using the framework of Scharmer (2009), who identified that leaders in challenging contexts need a combination of technical, relational and transformational knowledge. In the Green Economy Learning Assessment (GELA), Scharmer’s framework was re-articulated as technical, relational and transformational competence, to denote the need for a combination of knowledge, skills and values, and then tested with experts and stakeholders (Figure 2).

FIGURE 2: The technical, relational and transformational competences conceptual framework for analysing learning needs among policy practitioners.

The feedback from respondents in the GELA was that the framework was challenging to use, as it required much careful thought, but that it reflected well the range of learning needs these experts had been encountering in the field. The technical–relational–transformational (T-R-T) framework was then used to survey learning needs through online questionnaires and interviews. Respondents were asked: What technical competencies do policy practitioners need in your field? What relational competencies? What transformational competencies? Definitions were provided and examples given. The study yielded a wealth of information, which was used to map out clusters of learning needs and produce the GELA report (PAGE 2016). The researchers went on to use the framework to reflect on educational programmes in university contexts (Rosenberg, Lotz-Sisitka & Ramsarup 2018).

In this article, the same T-R-T framework is used to reflect on learning needs observed in the field of evaluation practice in which the authors work. Before this field is discussed, it is important to consider whether this framework, developed in a green economy policy-practice context, is relevant for the field of M&E. This article proposes that it is, for the following reasons. Firstly, the field and practice of M&E is contending with a changing world, characterised by the complex social–ecological challenges noted earlier (and described, for example, by Patton 2010 and Funnell & Rogers 2011). A framing emerging from the sustainability sciences (which span multi- and transdisciplinary contexts) could therefore appropriately be tested further for its relevance here. Secondly (and this is the second GELA finding relevant to this article), the GELA identified, as one of the learning needs for the green economy context, evaluation competence, further qualified as reflexive, transdisciplinary and supportive of social learning (PAGE 2016). This finding suggests that there is a basis for applying the framework to describe the expertise evaluation practitioners need, even if only in this field (see the Limitations section).

The next part of the article describes the sectors or field of practice in which the participatory observations were made, and to which the T-R-T framework was applied. Monitoring and evaluation takes place in a variety of contexts: basic education, higher education, public health, and more. In this case, the field of practice can broadly be described as environment and society. This field is multi- and transdisciplinary in nature, and draws on complex systems thinking, social and organisational learning, adaptive management and ecological economics, among other theoretical trajectories. It involves the social, ecological, organisational and educational sciences, and the confluences between them. Examples are climate change adaptation and resilience programmes, social–ecological landscape and river catchment programmes with social learning components, organisational learning in conservation agencies and protected areas, and education for sustainable development (ESD) and capacity-building. Despite their diversity, these are all contexts in which societal change, social learning and new practices are key objectives, underpinned by concerns about economic and broader social justice in the face of development failures, climate change and sustainability issues in general. Although such programmes are rolling out across the globe, the projects reviewed here are from southern Africa. Funding is typically from national government, international donors or corporates. The approaches to M&E in the reviewed projects varied, but most were developmental in nature (sensu Patton 2010), mostly formative but also summative, often participatory, and involved evaluators with both internal and external roles. The M&E systems observed seldom involved only a one-off event in the life of the programme, or only external evaluators. Furthermore, the M&E approaches used typically treated monitoring and evaluation as an ‘interconnected system’ in which ongoing monitoring provides the data for periodic evaluative activities (cf. Abrahams 2015:1, footnote). Finally, the programmes evaluated generally had an explicit transformation agenda. The findings that follow should be interpreted in this light.

Findings

Table 1 lists the competency needs observed in the field. There is not a one-to-one correspondence between the bullets in the two columns: several of the listed competencies may have been observed in one project, and most competency needs were observed in more than one project. Competency needs were identified through two kinds of observations made during the authors’ participation in the design and implementation of M&E systems in the field, either: ‘We are able to do this because we have this competency (in the M&E team)’ or ‘We are unable to do this because we lack this competency’. The years listed indicate the periods during which observations were made, rather than the duration of the project or its M&E.

In Table 2 the observations in Table 1, column 2, are framed as evaluator competencies and grouped using the T-R-T framing.

TABLE 2: Summary of competencies identified using the technical–relational–transformational framing.

What follows is an explication of findings in Tables 1 and 2.

The authors’ observations confirmed the importance of technical competencies, which is well established and documented elsewhere, for example in the evaluator competency lists drawn up by SAMEA, the American Evaluation Association (AEA 2018) and the Canadian Evaluation Society (CES 2018). Evaluators are technically competent when they can design and implement standard, tried-and-tested M&E procedures and practices. In South Africa, the website of the Department of Planning, Monitoring and Evaluation (DPME), for example, provides guidelines and templates for diverse standard evaluation types and procedures, technicalities with which evaluators should be familiar. Observations also confirmed the importance of newer skills such as working with programme theory (popularised as ‘theories of change’) and conducting developmental evaluations. These were particularly important in the complex contexts of many of the programmes in Table 1, column 1, where M&E was required to support organisations in continuously learning from implementation (developmental evaluation) and, in the process, refining how they understood their intervention and how it would result in the desired changes. This work, while technical in nature, also had a strong relational dimension (see below).

The need for relational competencies (row 2, column 2 in Table 1) was evident whenever evaluators applied their technical skills in the field, where they encountered a range of challenges requiring ‘relational’ responses. The authors observed that some evaluators were able to build trust, confidence, rapport and enthusiasm for M&E, whereas others gave up, or expressed a concern that engaging with programme staff would compromise objectivity and credibility. Evaluators with relational competence were asked to return and do more evaluations, as they were regarded as helpful contributors to a shared transformational endeavour. This achievement, however, cannot come at the cost of evaluator credibility. Funders and other stakeholders need to experience evaluations as credible. Thus, relational competence is not useful in the absence of technical competence to produce valid findings. Both were needed.

The combination of technical and relational competence was also evident in ‘theory of change’ work in which the authors participated. The evaluator had to lead implementers in articulating their programme theory; this required not only a deep understanding of the process, its purposes and the stages through which to do it (technical competence), but also sensitivity to the context and the ability to draw out often implicit insights from implementers (relational competence).

Findings regarding transformational competence are listed in the last row in Table 1. These relate to the evaluator’s ability to ‘come up with something new’, either findings or methods, and the ability to see the need for something new (anticipatory competence).

Based on feedback from implementers (e.g. in the RESILIM-O programme), it was clear that evaluation findings must provide new insights if they are to be useful to implementers with an interest in learning from evaluation, rather than merely ‘ticking the box’ for the funder. In all the projects reviewed, substantive knowledge of the field in which M&E was being done proved essential for developing such new insights.

Substantive knowledge of the field or domain in which the evaluation took place also seemed to be required for evaluators to establish trust and credibility with programme implementers; to guide the context-sensitive development of theories of change and indicators; to design case study instruments; and to analyse findings with some depth.

A case vignette illustrates the role of domain knowledge. In the Tsitsa Project, programme staff invited the M&E officer along on a field trip of the ‘Wisdom Trust’, a group of experts in the transdisciplinary social–ecological sciences that form the substantive domain of this programme. It was seen as a valuable opportunity for the officer to learn about the ideas that shaped the programme, to support her in developing the indicators for monitoring and for conceptualising evaluative case studies. The field trip, however, proved less than helpful in this regard: the experts’ discussions assumed a lot of background knowledge which the M&E officer lacked, generally preventing her from gaining the requisite insights.

It was also noticeable, in the transdisciplinary contexts of the Tsitsa and RESILIM-O projects, that evaluators must be able to use a variety of data sources and ‘knowings’ flexibly. Knowledge sources ranged from scientists to government officials to small-scale farmers, and opinions often varied among respondents. How do evaluators come to credible conclusions? How do they effectively combine quantitative information on the numbers of people trained or hectares affected by the programme with qualitative data on the nature of the training or of the changes effected on the land? How do they do so across scales? Implementers in the Tsitsa, RESILIM-O and SANParks projects, who used complexity-sensitive evaluation approaches for ‘on-the-ground’ learning while also reporting upwards to funders or line departments, expressed difficulty in bringing different M&E approaches together for sense-making and learning across system levels. To do this, evaluators needed knowledge and skills that could be called transformational competence, as it not only transforms knowledge forms into new syntheses, but also transforms standard M&E methods, focussed more on accountability, into hybrid methods that meet both accountability and learning needs.

Discussion

Table 2 does not present a comprehensive list of evaluator competencies. It does, however, indicate an expanding range of competencies that evaluators seem to need in a changing world.

This article does not focus strongly on technical competencies, but this does not imply that they are unimportant, or in good supply. Podems et al. (2014) referred to a DPME survey which found technical competence (particularly evaluation competencies) to be in short supply in South Africa. Technical competencies have, however, generally been well described, and the methods for developing them may be better established (see the discussion of Table 3).

TABLE 3: Types of knowledge and associated intervention points.

Relational competence is also described in various forms in the literature. The competence lists drawn up by SAMEA, the AEA and the CES, for example, all make reference to interpersonal skills, communication skills and cultural sensitivity. The relational competency needs observed in the field included but also exceeded interpersonal skills: they involved the ability not only to engage individuals, but to relate that engagement to a wider context, to understand and explain the programme and its M&E in this wider context, and to create a context in which stakeholders are motivated to collaborate and contribute. Evaluators drew on relational competence to work with programme staff to build and strengthen the culture of learning in organisations. The evaluator had to ‘read’ the multi-layered organisational context and respond to it relationally. Garcia (2016) conducted an in-depth study of interpersonal competence and situational awareness; the relational competence described here perhaps comprises a combination of the two.

Although the competencies described above are generally represented in SAMEA’s and other competency frameworks, the framing proposed here creates room for transformational competence. This includes the ability to gain new insights from evaluation findings, as well as the ability to see the need for, and develop, alternative evaluation methods and processes. This competence includes a commitment to credible empirical findings, rather than either empiricism or relativism. The authors propose that it requires a grasp of depth ontology (sensu Bhaskar’s critical realism; see ed. Bhaskar 2010) that enables integration across multiple layers of reality, from personal meanings and social constructs to physical realities, and the coherent use of a variety of M&E methods. These transformational competencies are not represented in the evaluator competency lists from South Africa, the USA or Canada.

A possible reason for the absence of transformational competencies from these lists may be the lack of a strong focus on, or the lack of evaluator experience with, developmental evaluation approaches. Developmental evaluation, as a response to complexity (Patton 2010), brings M&E much more closely into organisational management, planning and governance so that the evaluator is a member of the ‘core’ team, and M&E is more integrated into the functioning of the organisation (Dozois, Langlois & Blanchet-Cohen 2010). This requires more emphasis on relational and transformational skills, which need to be applied on an ongoing basis. The evaluator must be able to play the role of ‘critical friend’ and raise difficult questions without jeopardising the relationship. There is also an increased need for leadership and advocacy by the evaluator, to build capacity for evaluative thinking over time, create a sense of ownership and shift the responsibility for making meaning from the evaluator to the whole team (Dozois et al. 2010).

Monitoring and evaluation methods and approaches that respond to complexity, and that generate multiple data types, have been anticipated in the literature. However, they do not currently feature strongly in short courses or higher education programmes for evaluators, and even where they do, much is still to be worked out in practice. The ability to see and understand the need for complexity-sensitive M&E and, in particular, the ability to design, implement and adapt such approaches in the field, emerged as a very significant evaluator competence required at this time, alongside the technical and relational competences that are also essential but perhaps more established.

The professional judgement or discernment required to conduct M&E effectively in complex contexts, in the absence of blueprints, falls into a category that Freidson (2001) called a professional logic (distinct from the logics of the market and bureaucracy). This links to Podems’ (2014) contention that evaluation is an art. Although the authors would suggest that it is also a science, the implications of the metaphor of art, which implies high levels of skill and judgement on the part of the professional evaluator, certainly seem apt.

This might suggest that the findings shared in this article pertain mostly to the senior evaluator, or evaluation leader, but the authors have also observed the need for transformational and relational skills among early-career evaluators in an evaluation team, given that they are often the interface with programme implementers.

The clustering of competencies in groups, here labelled technical, relational and transformational, is significant, and so is the intertwining of knowledge, values and skills as competencies. This can be contrasted with lists of competencies, or lists of knowledge, skills and values, described by Wiek et al. (2011) as ‘laundry lists’. There are several reasons to be cautious about ‘laundry lists’ of discrete bits of knowledge, skills and values. They lend themselves to narrow interpretations of how evaluations should be done, and of who should be selected to do evaluations. Podems (2014) commented on the gatekeeping role that competency lists play when they become bureaucratised, producing the risk of keeping out not only some evaluators, but also different perspectives on evaluation. The competency approach in general has been criticised by educators who point to the holistic way in which professionals (such as educators) use their knowledge, values and skills in their practice, a holism that is lost when competence is atomised. Wiek et al. (2011), too, were concerned about the limited value of the competency-list approach and hence proposed the ‘clustering’ approach, which was followed in the GELA and in this article. It is the view of the authors that the ‘cluster’ better reflects the way in which knowledge and skills from different domains, and dispositions, are deployed by evaluators. Above all, the combination of competencies was vital.

How to develop these competencies – Education recommendations

The findings suggest that evaluators need an expanding range of competencies, not just technical and interpersonal, but more broadly relational, as well as transformational. What does this mean for the universities and other training providers who offer degrees and short courses for novices and experienced evaluators?

Garcia (2016) recommended the use of case study-based learning to expose students to a diversity of contexts and to challenge them to consider the complex contexts of programmes, as well as role play to explore how they might respond in various situations, together with self-reflection. She also suggested that field work, for example during internships, would be valuable and noted that this is seldom a part of the professional training of evaluators.

The leadership development work of Scharmer, whose T-R-T framework was adapted for the GELA research, is instructive. Scharmer (2009) argued that education and training providers are most familiar with the training of individuals to produce technical knowledge (see Table 3), that is, the bottom left-hand box of individual technical skills training and capacity building.

Scharmer (2009) made the point that professionals in leadership positions need to work with others to solve complex problems. Based on the authors’ observations in the field, the evaluator, in particular the senior evaluator, is such a leader, who needs to work with funders, government officials and programme implementers, as well as fellow evaluators, to produce evidence and insights to guide complex programmes. Scharmer (2009) suggested that to build relational knowledge, leaders need opportunities to engage in dialogues with other stakeholders (the middle box on the bottom row of Table 3) as learning opportunities.

To build transformational competence, Scharmer suggested involvement in multi-stakeholder innovations, that is, engaging students in real-life case studies where they tackle a complex, practical and intellectual problem with other professionals and broader stakeholders. Such opportunities come to evaluators ‘on the job’, but increasingly, universities are starting to build such opportunities into undergraduate and postgraduate teaching. Examples are change projects (Lotz-Sisitka & Pesanayi 2020) and challenge labs (Rosenberg 2020).

This may require innovations in assessment practices, so that students are not always assessed as individuals but also as groups, placing high demands on the development of relational knowledge (how to achieve a desired result together despite different motivations and skill levels, how to make optimal use of the different skills and insights in the group, etc.). It is also suggested that evaluation students be taught metatheory that will allow them to integrate across multiple knowledge types. An example of such metatheory is social and critical realism based on the depth ontology developed by Bhaskar, who also applied it in the context of climate change (ed. Bhaskar 2010).

Conclusions

In simple contexts, with few variables that can be easily manipulated, a small set of technical skills, interpersonal skills such as effective communication, and a bureaucratic logic may suffice. In the changing world in which development contexts present as complex, open systems with multiple interacting variables and emergence (as in climate change programmes or pandemic responses), more is needed to ensure that M&E is able to support learning and a world that is changing for the better.

In complex contexts learning from M&E is critical, and theory-driven and developmental evaluations are prominent as programme developers and donors alike search for new ways to undertake M&E. In these contexts, evaluators need not only standard technical M&E competencies and relational competencies, but also transformational competencies.

Transformational competencies, generally not included in existing evaluator competency lists from South Africa and abroad, refer to the abilities that evaluators in complex contexts need to work effectively across knowledge boundaries (e.g. M&E expertise combined with knowledge of the substantive domain of the programme being evaluated); to design new but nonetheless credible evaluation methods where standard M&E approaches seem inadequate (e.g. they are not complexity-sensitive); to use multiple data sources and ‘ways of knowing’ with a sound understanding of ontology to draw credible conclusions; and to reflect and adapt on an ongoing basis. That is, they require evaluator reflexivity.

The reflections shared here, and the framework on which they were based, broadened the authors’ understanding of the competencies evaluators need. This broadening is necessitated by the dynamically shifting context in which we work, where long-standing assumptions about how we need to develop, and how we need to evaluate development, are being challenged by awareness of ‘the shadow side of modernity’, which includes the ways in which the sciences – social, natural and economic, as well as evaluation – have either contributed to social–ecological problems or failed to address them.

The technical–relational–transformational framing was found to be useful for understanding and discussing evaluator competencies, as it allows for a good grasp of the complex, interconnected knowledge, skills and dispositions, and the full range of competencies, that evaluators need. This in turn allows educators to design programmes that may usefully include opportunities to explore complex contexts and try out different approaches, including engagements with real-life challenges and with other role players.

Limitations

The findings should be considered in context. The T-R-T framework and the competencies identified here are perhaps most relevant in transdisciplinary contexts. Their relevance in contexts like public health and basic education could be further explored.

A limitation of the competency framing applied here is that it individualises practice, which in the field is seldom the remit of isolated individuals. Although competency frameworks focus on what individuals should be able to do, most complex tasks – such as designing and undertaking a big evaluation – are undertaken by people in functional groups. Space does not allow for elaboration of this point here. The tendency to interpret or present competency frameworks as lists of discrete skills residing in individuals is associated with assessment and accreditation practices, but need not shape the way in which educators design and deliver courses. That is, courses could create opportunities for learners to work together, support and complement each other as they would in professional practice, learn from and with each other, and in the process develop the sensitivities and skills that form part of relational competence, and the anticipation and discernment that contribute to transformational competence.

Acknowledgements

The authors thank the three anonymous reviewers for their feedback, which has improved the manuscript.

Competing interests

The authors declare that they have no financial or personal relationships that may have inappropriately influenced them in writing this article.

Authors’ contributions

E.R. was the lead author who led the analysis and the writing of the article, which originated with shared conversations in the field. K.K. added more analysis from the field and situated the article more strongly in the literature. Both authors agreed to the final version.

Ethical considerations

This article followed all ethical standards for research without direct contact with human or animal subjects.

Funding information

This research received no specific grant from any funding agency; the original projects that were reviewed had numerous funding sources, which are named in the Methods section of the article.

Data availability

Data sharing is not applicable to this article as no new data were created or analysed in this study.

Disclaimer

The views and opinions expressed in this article are those of the authors and do not necessarily reflect the official policy or position of any affiliated agency of the authors.

References

Abrahams, M.A., 2015, ‘A review of monitoring and evaluation in South Africa: Monitoring and evaluation as a profession, an industry and a governance tool’, African Evaluation Journal 3(1), Art #142, 8 pages. https://doi.org/10.4102/aej.v3i1.142

American Evaluation Association, 2018, ‘AEA competencies’, viewed 20 February 2020, from https://www.eval.org/page/competencies.

Beck, U., 1992, Risk society: Towards a new modernity, Sage, London.

Bhaskar, R. (ed.), 2010, Interdisciplinarity and climate change: Transforming knowledge and practice for our global future, Routledge, London.

Botha, L., Kotschy, K. & Rosenberg, E., 2017, Participatory monitoring, evaluation, reflection and learning (PMERL) framework for the NLEIP Programme, Rhodes University Environmental Learning Research Centre, Makhanda/Grahamstown.

Britt, H., 2016, Discussion note: Complexity-aware monitoring, USAID, Washington, DC.

Brousselle, A. & Buregeya, J., 2018, ‘Theory-based evaluations: Framing the existence of a new theory in evaluation and the rise of the 5th generation’, Evaluation 24(2), 153–168. https://doi.org/10.1177/1356389018765487

Canadian Evaluation Society (CES), 2018, Competencies for Canadian evaluation practice, viewed 20 February 2020, from https://evaluationcanada.ca/txt/2_competencies_cdn_evaluation_practice.pdf.

Colvin, J., Rosenberg, E. & Burt, J., 2017, ‘Resilience monitoring, evaluation, assessment and learning (MEAL)’, Presentations and Workshop at the Resilience for Development Colloquium 2017, Johannesburg, May 08–10, 2017.

Dozois, E., Langlois, M. & Blanchet-Cohen, N., 2010, DE201: A practitioner’s guide to developmental evaluation, J.W. McConnell Family Foundation and International Institute for Child Rights and Development, Victoria, British Columbia.

Freidson, E., 2001, Professionalism: The third logic, University of Chicago Press, Chicago, IL.

Funnell, S.C. & Rogers, P.J., 2011, Purposeful program theory: Effective use of theories of change and logic models, John Wiley, New York, NY.

Garcia, G.L., 2016, ‘Understanding and defining situational awareness and interpersonal competence as essential evaluator competencies’, PhD thesis, Department of Educational Psychology, University of Illinois at Urbana-Champaign.

Human, H., 2019, ‘Making the case for M&E that reduces stress and uncertainty and increases commitment and creativity’, Presented at the 7th Biennial Conference of SAMEA, Johannesburg, October 23–25, 2019.

Kurtz, C.F. & Snowden, D.J., 2003, ‘The new dynamics of strategy: Sense-making in a complex and complicated world’, IBM Systems Journal 42(3), 462–483. https://doi.org/10.1147/sj.423.0462

Lotz-Sisitka, H. & Pesanayi, T., 2020, ‘Formative interventionist research generating iterative mediation processes in a vocational education and training learning network’, in E. Rosenberg, P. Ramsarup & H. Lotz-Sisitka (eds.), Green skills research: Methods, models and cases, pp. 157–174, Routledge, New York, NY.

Mudau Mushwana, V., 2020, ‘Three case studies of learning and reporting in natural resource management programmes’, MEd study, Department of Education, Rhodes University.

O’Donoghue, R., 1986, ‘Environmental education and evaluation: An eleventh hour reconciliation’, Southern African Journal of Environmental Education 3(1986), 18–21.

Partnership for Action on Green Economy, 2016, Green economy learning assessment South Africa: Critical competencies for driving a green transition, DEA, DHET, UNITAR and Rhodes University, Government Printers, Pretoria.

Patton, M.Q., 2010, Developmental evaluation: Applying complexity concepts to enhance innovation and use, Guilford, New York, NY.

Pawson, R. & Tilley, N., 1997, Realistic evaluation, Sage, London.

Podems, D., 2014, ‘Evaluator competencies and professionalizing the field: Where are we now?’, Canadian Journal of Program Evaluation 28(3), 127–136.

Podems, D., Goldman, I. & Jacobs, C., 2014, ‘Evaluation competencies: The South African government experience’, Canadian Journal of Program Evaluation 28(3), 71–85.

Raworth, K., 2017, Doughnut economics: Seven ways to think like a 21st-century economist, Random House Business, London.

Rockström, J., Steffen, W., Noone, K., Persson, Å., Chapin, F.S., III, Lambin, E.F. et al., 2009, ‘Planetary boundaries: Exploring the safe operating space for humanity’, Ecology and Society 14(2), 32. https://doi.org/10.5751/ES-03180-140232

Rogers, P.J., 2008, ‘Using programme theory to evaluate complicated and complex aspects of interventions’, Evaluation 14(1), 29–48. https://doi.org/10.1177/1356389007084674

Rosenberg, E., 2017, ‘Methodology for less harmful, more helpful evaluation in natural resource management programmes in South Africa’, Presented at the International Conference for Realist Research, Evaluation and Synthesis, Brisbane, October 25, 2017.

Rosenberg, E., 2019, ‘Making light of a heavy topic: Monitoring, evaluation and learning’, Keynote presented at the Garden Route Interface Meeting, SANParks, Rondevlei, September 17, 2019.

Rosenberg, E., 2020, ‘Framing learning needs assessments for sustainability policy practices’, in E. Rosenberg, P. Ramsarup & H. Lotz-Sisitka (eds.), Green skills research: Methods, models and cases, pp. 128–142, Routledge, New York, NY.

Rosenberg, E., Lotz-Sisitka, H. & Ramsarup, P., 2018, ‘The green economy learning assessment South Africa: Higher education, skills and work-based learning’, Higher Education, Skills and Work-Based Learning 8(3), 243–258. https://doi.org/10.1108/HESWBL-03-2018-0041

Rosenberg, E., Pollard, S., Du Toit, D., Graf, J., Retief, H., Kong, T. et al., 2015, ‘Interactive half-day session on monitoring, evaluation, learning & reporting framework for RESILIM-O’, Presented at Southern African Programme on Ecosystem Change and Society (SAPECS), Stellenbosch, 04 November 2015.

Scharmer, O., 2009, ‘Ten propositions on transforming the current leadership development paradigm’, Prepared for the World Bank Round Table on Leadership for Development Impact, World Bank Institute, Washington, DC, September 27–28, 2009.

Smith, L., Pophiwa, N. & Tirivanhu, P., 2019, ‘Introduction’, in C. Blaser Mapitsa, P. Tirivanhu & N. Pophiwa (eds.), Evaluation landscape in Africa: Context, methods and capacity, pp. 1–17, SUN Press, Stellenbosch.

Wiek, A., Withycombe, L. & Redman, C.L., 2011, ‘Key competencies in sustainability: A reference framework for academic program development’, Sustainability Science 6(2), 203–218. https://doi.org/10.1007/s11625-011-0132-6

