About the Author(s)


Caitlin Blaser-Mapitsa
Faculty of Commerce, Law and Management, School of Governance, University of the Witwatersrand, Johannesburg, South Africa

Citation


Blaser-Mapitsa, C., 2022, ‘A scoping review of intersections between indigenous knowledge systems and complexity-responsive evaluation research’, African Evaluation Journal 10(1), a624. https://doi.org/10.4102/aej.v10i1.624

Original Research

A scoping review of intersections between indigenous knowledge systems and complexity-responsive evaluation research

Caitlin Blaser-Mapitsa

Received: 18 Feb. 2022; Accepted: 01 June 2022; Published: 22 July 2022

Copyright: © 2022. The Author(s). Licensee: AOSIS.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Background: Against the acknowledged need to transform the evaluation sector in Africa, locally generated approaches have become a recent area of contestation for both researchers and practitioners. Whilst the need for an African evaluation approach has been well established in the literature, there are still significant gaps in a proactive response. One of these gaps is the role of indigenous knowledge systems in such evaluation approaches. Indigenous knowledge systems have been a priority research area for decades, often in the fields of science and technology, education and research methods. Despite these strong overlaps with areas of interest to evaluators, there has been relatively little intersection between research on evaluation systems and that on indigenous knowledge systems.

Objectives: This article brings together these two areas of research to see what lessons for African-rooted evaluation approaches emerge from the body of research on indigenous knowledge systems.

Method: To do this, a scoping review was conducted, applying a thematic analysis to literature identified for inclusion in the study.

Results: This study found that there is considerable scope for the evaluation sector to draw on indigenous knowledge systems research, particularly drawing on process and methodological lessons from designing studies, as well as defining power dynamics and critical systems approaches.

Conclusion: This analysis can contribute to a needed debate about how to define and promote localised, contextually relevant evaluation tools and methods. It can also contribute to building a research agenda around African evaluation approaches.

Keywords: evaluation; critical systems; indigenous knowledge systems; research methods; applied research; transforming development.

Introduction

The development evaluation sector has had long-standing debates around what decoloniality means for the sector (Chilisa & Mertens 2021; Omosa et al. 2021). In Africa, this has been because of the neocolonial nature of the development industry in general and what it means for evaluation to operate primarily in this donor-driven context (Auriacombe & Cloete 2019; Robinson 2021). This has made reliable and valid evaluation findings difficult to obtain in a context where evaluation practitioners often know little about the local culture, political system and developmental context within which a programme is being implemented (Tirivanhu & Mapitsa 2019).

One of the results of the neocolonial nature of development is that many development organisations from the Global North have operational and procurement systems which incentivise the appointment of lead evaluators from their country of origin (Denny-Smith et al. 2019; Kithatu-Kiwekete & Phillips 2020; Ngwabi & Wildschut 2019). This relegates local evaluators, who may be able to provide leadership around contextually appropriate framing for an evaluation, to the role of data collectors within evaluations that have already been defined in scope and direction (Uwizeyimana 2021). Linked to this is a long-standing debate within the evaluation sector between those who see evaluation as a neutral field of measurement, and promote a technicist approach to evaluation (Mouton 2007), and those who argue that evaluation is an inherently political practice (Abma 2006; Raimondo 2018; Wilson & Howcroft 2005).

The latter school of thought is aligned with researchers working on indigenous knowledge systems in that both are premised on a critical systems starting point as fundamental to understanding how a programme, project, research or data collection process will be framed (Lub 2015). Given the linkages between evaluation and programme planning and design, programme managers often see evaluations as a tool to shift the paradigms espoused by development projects (Guyadeen & Seasons 2018). With that in mind, many evaluators believe that evaluation can be a lever for transformative change in this context, in particular through evaluations that are designed through the paradigms of the intended beneficiaries of development interventions (Chaplowe & Hejnowicz 2021). However, both practitioners and scholars in the sector have struggled to articulate what the key processes and characteristics of these approaches look like. This has many causes, which will be discussed in more detail below. Salient definitions of a ‘Made in Africa’ evaluation approach centre on power and hegemony in their discussions, but also stop at describing the problems of articulating and implementing such an approach (Mbava & Chapman 2020; Omosa 2019).

With these commonalities in mind, this research explores the intersections between research on indigenous knowledge systems and ‘Made in Africa’ evaluation practice. It considers what lessons research in the field of indigenous knowledge systems holds for defining a ‘Made in Africa’ evaluation approach, as well as the possibilities and limitations of promoting more synergy between the two fields of enquiry.

This article carries out a scoping review of African research on indigenous knowledge systems over the past decade and explores lessons that the evaluation sector can take from research on indigenous knowledge systems. Through creating stronger linkages between these two areas of discourse, the article aims to contribute to a wider agenda to build evaluation approaches that engage meaningfully with indigeneity and the potential of evaluation to pivot development planning in the direction of transformative paradigms. This article does not describe, synthesise or present the content of these knowledge systems, which might be an important step in moving from understanding the role of these systems to creating the actual tools and processes that would define their contributions to specific areas of evaluation.

Context of indigenous knowledge systems and evaluation

One of the ways in which evaluation aims to contribute to development is by influencing the paradigms, definitions of success and mental models programme staff members use to understand how change happens (Ofir 2021). As development is increasingly viewed as a series of complex systems, evaluation has responded with the creation of tools and approaches that can navigate this context (Bamberger 2015). Since the foundation of development evaluation, there has been a focus on conceptual evaluation use, and how evaluators can facilitate changing the lenses through which programmes are planned, designed and implemented. This can appear in evaluation practice in a range of ways, from challenging neoliberal approaches the public sector may take to service provision, to developing process steps that shift leadership in the evaluation process from commissioners of the evaluation to the intended beneficiaries of the programme being evaluated (Robinson 2021).

Whilst the study of indigenous knowledge systems is not new, both the interest in and volume of research have been growing as climate change has highlighted the potential of indigenous knowledge to contribute to more sustainable approaches (Petzold et al. 2021). This has been further compounded by the coronavirus disease 2019 (COVID-19) pandemic, which has demonstrated the interconnections between ecosystem knowledge, public health and the global economy (Sibanda & Ofir 2021). The radical social changes brought about by the pandemic have created a space for mainstream discussions about different ways of social, economic and environmental organising (Patton 2021).

Whilst there is no single definition of indigenous knowledge, most of the widely used definitions share three key features. The first is that indigenous knowledge has evolved over many generations and is historically rooted in a community. The second is that it exists in a certain cultural context. Whilst this can be difficult to define precisely, and difficult also to interpret through intersectional lenses of diversity, it is an important feature of indigenous knowledge. The third is that it is often geographically bounded in some way (Gadgil, Berkes & Folke 2021; Oloruntoba, Afolayan & Yacob-Haliso 2020). Whilst no populations are entirely homogeneous, nor entirely sedentary, historically, cultures have formed that have been linked to certain ecological systems, and values systems, history, spirituality and mythology have been impossible to disconnect from ecology, geography and place-making. Within the evaluation sector, there has been an increased interest in indigenous paradigms, particularly around the ways in which they frame the success of socio-ecological systems (Latulippe & Klenk 2020; Thompson, Lantz & Ban 2020). Whilst indigenous knowledge systems research often engages with the complex power dynamics that influence the uptake or exclusion of indigenous knowledge, it also seeks to contribute to a body of knowledge within the technical sector of focus, such as environmental science or education. It is this process of generating research with a sectoral focus that the evaluation community often fails to draw on when considering the lenses through which programmatic success can be viewed.

There has been a tendency for traditional knowledge to be viewed as ‘data’, rather than a different paradigm, and lifting specific pieces of information without the worldview from which they came makes it difficult to apply an evaluative lens appropriately (Casimirri 2003). Inappropriate approaches to locating or interpreting indigenous knowledge often lead to its rejection because it has been placed in an incompatible system (Tran, Takeuchi & Shaw 2009). As Casimirri (2003) highlights, ‘traditional ecological knowledge is only considered relevant when validated by Western science’. Early literature on indigenous knowledge often viewed it as discrete pieces of information, for example, about uses of certain plants or of meaning-making in weather patterns (Konadu 2007). Over time, the ‘systems’ piece of indigenous knowledge has become increasingly important, and the discrete pieces of information have been replaced by an understanding that this information is located in a worldview which encompasses ethics and values, as well as institutional and technical components. Figure 1 illustrates what this systems approach may look like, including interconnected components of data, ethics and values, governance and management. This is an important reminder that a specific piece of indigenous knowledge cannot be directly imported into a Western academic or evaluative paradigm and assessed outside its original context. Rather, significant work is needed to ensure that this information is interpreted appropriately and located within this system.

FIGURE 1: Indigenous knowledge systems.

A similar shift has taken place in the same time period in the literature on evaluations. Whilst evaluation practice was initially seen as a discrete entity, over time research on these evaluations evolved from considering individual evaluations, to systems of evaluation practice that include multiple stakeholders and the relationships amongst them (Figure 2).

FIGURE 2: Diagnosing evaluation capacity in Africa.

Whilst neither of the models above is exhaustive in its reflection of indigenous knowledge systems and evaluation systems, respectively, and whilst they are not identical in scope and focus, they do demonstrate the considerable overlap in the ways both systemic approaches link the technical aspects of data collection and observation to broader systems of values and culture, through institutional mechanisms of management and governance. The two systems differ somewhat in scale and intent, but both grapple with common issues of working across sectoral areas and balancing multiple stakeholder viewpoints. If any central difference could be extracted, it would be that indigenous knowledge systems research focuses more on the processes of knowledge generation, whilst evaluation systems research focuses more on the processes of use. Both, however, are concerned about the relational aspects of knowledge and the power dynamics within the respective fields. Perhaps a second difference worth noting is that much research on indigenous knowledge systems is written from the perspective of holders of indigenous knowledge, who are often critiquing unequal power dynamics from a position of lived experience (Cornielje 2021; McGloin 2015). Evaluation research, on the other hand, often critiques hegemony from the positionality of a ‘neutral’ third party in the development space. This gives both areas of exploration a certain legitimacy in their voice, although there is much research that still needs to be conducted to better understand how positionality, identity, indigeneity and other factors are expressed in the applied research space (Tirivanhu & Mapitsa 2019).

Indigenous knowledge systems offer a wide range of approaches to framing socio-ecological systems in ways that challenge conventional models of growth and development. However, integrating them into contemporary evaluation practice is challenging for a range of reasons, including difficulties in accessing this knowledge, their diversity in scope, and the fact that they often apply to contexts different from those in which development programming takes place. Despite these challenges, they can still play an important role in guiding the way development is framed and understood. Perhaps most helpfully, the area of indigenous knowledge research that considers methods, ontologies and approaches to data collection can help the evaluation field centre these paradigms as part of a complexity-responsive systems approach to development.

Methods

This article adapts a scoping review methodology, with the aim of mapping research on indigenous knowledge systems that holds lessons for ‘Made in Africa’ evaluation. The scoping approach is designed to identify studies that focus on research about indigenous knowledge systems, which could include research on the indigenous knowledge systems themselves, provided the methodology was sufficiently explained and discussed, or papers with an explicit focus on research paradigms, processes, approaches and techniques. The aim was not to fully review literature on the content of indigenous knowledge systems in Africa, which has already been done in a range of national-scale multistakeholder initiatives (Balogun & Kalusopa 2021), but rather to draw a small sample of studies that allows for the identification of points of intersection between indigenous knowledge systems research and evaluation discourse in Africa.

The study adopted the five-stage framework of Arksey and O’Malley (2005) and applied it as described below. First, the research question was defined, and a brief protocol was developed that included inclusion and exclusion criteria in line with the question. Studies that did not either describe or reflect on the methods used were excluded, as were studies that were exclusively technical in nature, such as those listing and describing local-language names of plant species.

A search was carried out on Google Scholar that included components of indigenous knowledge systems research in Africa, as well as on South Africa’s Department of Science and Technology database, to identify studies that had been conducted in English within the last 10 years. This was a first step in identifying relevant studies, although it initially yielded 168 000 results. These were exported in comma-separated values (CSV) format and imported into Excel to draw a random sample of 250 studies. This number was chosen primarily because of practical limits on the author’s time and resources; in addition, the first 50 studies were reviewed in some detail to consider whether there was diversity in terms of geography, sector, gender of the author and methods covered. Two hundred and fifty was judged a more than adequate number to reflect diversity in all of these areas, as well as to provide sufficient data to map key debates and identify points of intersection.
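The sampling step above was carried out in Excel. Purely as an illustration, an equivalent and reproducible draw could be scripted as in the following sketch; the file name, column contents and random seed are hypothetical and not part of the original study.

```python
# Illustrative sketch only: a scripted equivalent of the random draw of 250
# records described above (the actual sample was drawn in Excel).
# The file name and random seed are hypothetical.
import pandas as pd

# Search results previously exported in CSV format
results = pd.read_csv("ik_search_results.csv")

# Draw a random sample of 250 studies for screening; a fixed seed makes the draw repeatable
sample = results.sample(n=250, random_state=2022)
sample.to_csv("screening_sample.csv", index=False)
```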

Studies were reviewed by the author to see whether they contained the following relevant issues:

  • methodological lessons
  • considerations of power, relational approaches or stakeholder dynamics
  • conceptual or epistemic reflections
  • reflection on the nature or role of indigenous knowledge systems
  • considerations of indigenous knowledge use.

If none of the above issues were present, the study was replaced by the next available study on the randomisation list, until 250 studies meeting the criteria had been identified (this required screening 732 studies in total). These were then appraised, based on the identification of lessons relevant to either evaluation systems or evaluation approaches, and were thematically coded. Based on emergent results, the data were charted to identify themes. This served to map the literature extracted and formed the basis for the thematic analysis that informed the results and discussion below. Whilst thematic coding captured the various types of diversity included in the study, a quantitative breakdown of reports by type, sector, theme, etc., has not been included in the article, primarily because this review was not intended to be systematic, but rather to be sufficiently comprehensive to map out key points of intersection and synergy. Whilst the studies reviewed were sufficient to allow for this discussion, they are not intended to be either comprehensive or representative of the body of literature as a whole.
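The replacement step described above amounts to a simple quota-filling procedure. The sketch below is illustrative only: the function and variable names are hypothetical, and in the actual review eligibility was judged manually against the criteria listed above.

```python
# Minimal sketch of the replacement procedure, under assumed names:
# `randomised_studies` is the full randomisation list drawn from the search
# results; `meets_criteria` stands in for the author's manual screening
# against the inclusion criteria.
def fill_quota(randomised_studies, meets_criteria, quota=250):
    included = []
    screened = 0
    for study in randomised_studies:
        screened += 1
        if meets_criteria(study):    # manual judgement in the actual review
            included.append(study)
        if len(included) == quota:   # stop once the quota of eligible studies is met
            break
    return included, screened        # in the study: 250 included after screening 732
```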

In addition to looking at research methods and data collection approaches as per the inclusion criteria above, there are certainly lessons evaluators could learn through an interrogation of the indigenous knowledge systems content itself. One area this study expressly did not address is the large number of studies where evaluation work and research on indigenous knowledge systems have been conducted in overlapping or cognate technical fields, such as early childhood development, farmer extension programmes and more. Any one of these would merit a synthesis review as a promising area of future exploration, but the intention was not to consider the specific results of the studies in question, but rather to extract methodological lessons that may allow for either better integration of indigenous knowledge into evaluation processes or at least consideration of the experience and conceptual lessons of research in this field. Research on indigenous knowledge systems and the evaluations that have been conducted reflect similar sectoral distributions, with much research concentrated in education, health and agriculture. This coheres with the sectoral distribution of evaluation reports in the African Evaluation Database (Blaser Mapitsa, Tirivanhu & Pophiwa 2019). Because this review was not systematic and followed different search methods and inclusion criteria from those used to develop the African Evaluation Database, it would not be appropriate to directly compare sectoral distributions. However, the anecdotal similarities suggest that there is significant potential for co-investigation within these technical areas.

Results

Synergy, alignment and contestations

What emerged from a thematic analysis of the studies was a very clear alignment and integration between the literature on evaluation systems and that on indigenous knowledge systems. The greatest area of alignment was around shared concerns about power, both in data collection tools and methods and in the methodologies implied by taking a systemic lens to both fields – considering the commissioning organisations, the identity of the researchers and the participants in the research, and locating each individual study in a wider values system. These often manifested in discussions around elements of participation, stakeholder engagement and relational dynamics. There was a seemingly shared dilemma around how to address power dynamics in the field, including the position of researchers or practitioners who are ‘outsiders’ in the data collection process.

There were also certain similarities in the structure of data in the field of indigenous knowledge systems and those obtained for the purposes of evaluation. In particular, both are often drawn from a limited number of communities, without the ability to generalise results across a wider population. This has specific implications for the ways in which evidence is drawn on for use in policymaking or programmatic design. A range of synthesis tools, such as evidence reviews and knowledge gap maps, are currently a popular response to research bases which have not necessarily been systematised, and there is considerable scope for a shared investigation of the potential of these methods in both fields (Saran & White 2018).

Both fields struggle with issues of intellectual property and data ownership, although the drivers of this differ, and as a result, there is a divergence in the way this issue manifests in both fields. In evaluations, commissioning agencies often legally own the intellectual property of the evaluation, despite the evaluator often having a larger stake in knowledge production. In indigenous knowledge systems, indigenous knowledge holders should, in principle, hold the intellectual property rights to their knowledge, but the legal systems that would enable this are exclusionary in a variety of ways, limiting the ways in which these rights can be claimed (Lawson & Adhikari 2018). Similarly, Casimirri (2003) points to the problem that ‘the way knowledge is transferred may vary culturally and much TEK [traditional ecological knowledge] research requires that knowledge holders breach cultural codes about how and to whom new knowledge is given’.

Related to this is the stigma ‘research’ has for many indigenous people. In many communities, research is (Armatas et al. 2016):

[A] reminder to many indigenous peoples of colonial excesses including the exploitation, extraction, and assumed ownership of knowledge, and attempting to overcome this intellectual imperialism by blurring the lines between the two intellectual traditions (i.e. western science and IK [indigenous knowledge]) is unrealistic. (p. 7)

Whilst these areas of synergy between evaluation and indigenous knowledge systems research are promising for collaboration, there were also numerous points of difference between evaluation practice and indigenous knowledge systems research. The two fields have divergent target audiences, and the difference in intended use was important in locating the studies. Whilst both fields of research often have some advocacy objectives in addition to scientific enquiry, these were directed at the academy, in the case of indigenous knowledge systems research, and at development industry practitioners and policymakers, in the case of evaluations.

What is discussed below is not intended to be a full review or synthesis of existing evidence, which would undoubtedly be interesting and valuable. Rather, it is a selection of a small number of studies that can illustrate certain themes and trends in research on indigenous knowledge systems, and within these, areas of synergy can be identified with the research agenda on African evaluation systems.

Need for an integrated approach

There are many factors that point towards a need for lessons from indigenous knowledge systems to be prioritised by complexity-responsive evaluation approaches. One of them is that an intersectional approach to rights in development demonstrates the need for power to be challenged from multiple perspectives (Hankivsky & Cormier 2011). A second factor is that there is a demonstrated need in research for diverse, complexity-responsive models to transform socio-ecological systems (Biggs et al. 2021). Finally, there is a unique opportunity right now created by the convergence of multiple crises. This has created a space for listening amongst many stakeholders in evaluation systems building.

Both development evaluation systems and indigenous knowledge systems are explicit about challenging power structures as part of a transformative agenda. In the field of evaluation, part of this evolution has been considering a number of cognate fields beyond evaluation practice, including evaluative thinking in management, for example. Research on indigenous knowledge systems often speaks about this expanded approach to evaluation. As Levac et al. (2018) frame this:

[W]hen undertaken in a way that does not merely ameliorate conditions of inequality, but redresses them, multi-epistemic scholarship changes not only how we work (our methods), and how we talk about or share our work (knowledge mobilization), but also how we exist as reflexive and relational beings. (p. viii)

Feminist research has, since its inception, been based on using tools and methodologies of research to transform power dynamics, which means it has important lessons for both areas of enquiry (Bamberger & Podems 2002; Weerawardhana 2018). In feminist evaluation research, it can be challenging to disentangle a programme whose planned results are feminist in nature or benefit women from an evaluation methodology that can encourage feminist outcomes for programmes not specifically working towards gender justice-related results (Jansen van Rensburg & Mapitsa 2017). Similarly, there are many projects and programmes that aim to benefit indigenous communities or promote intersectional strengthening of socio-ecological systems. This is distinct from using a lens of indigenous knowledge to evaluate a programme that may have other technical or sectoral objectives. Both evaluation systems research and indigenous knowledge research grapple with separating an epistemological lens for approaching a topic from the topic itself.

A second consideration is the current importance of framing evaluations in a way that is simultaneously contextually responsive and systematically framed (Reynolds et al. 2016). The confluence of climate change, COVID-19 and rampant inequality has meant that the intersecting and collective nature of global development problems must be centred in values-driven evaluation (Gullickson & Hannum 2019). Whilst indigenous knowledge systems cannot single-handedly solve this challenge, the contextual diversity from which indigenous knowledge emerges, combined with the systemic worldviews common to many approaches, means that these systems hold particular promise for relevance.

Finally, there is growing recognition, both within the evaluation community and more broadly, that traditional approaches to development, be they the system of international aid, more localised national development planning or even multistakeholder initiatives like the Sustainable Development Goals (SDGs), are failing to shift the underlying causes of poverty and inequality, let alone shift global trajectories around climate change (Forestieri 2020). Reconciliation requires mutual learning (Levac et al. 2018), and the context is currently open for this learning to take place.

Barriers to integrating indigenous knowledge systems research into evaluation

There are significant barriers to integrating indigenous knowledge systems research approaches into evaluation. Some of them are barriers that are linked to the characteristics of indigenous knowledge, and these pose a challenge to its use across sectors, not only in evaluations. Data quality is often described as having the following requirements: completeness, validity, timeliness, consistency and integrity (Sebastian-Coleman 2013). By its nature, data stemming from indigenous knowledge systems face challenges in meeting these standards. Chronic under-investment and contestation by hegemonic powers have meant that indigenous knowledge is rarely complete. Linguistic and geographic barriers to collecting indigenous knowledge, in addition to the systemic exclusion of indigenous people from centres of knowledge production, mean that gathering indigenous data is rarely timely. The context-dependent nature of indigenous knowledge means it is not always consistent; Sharma (2021) refers to it as ‘the heuristic of the everyday’. This can be at odds with needs for comparability and compatibility of many kinds in data sets – linguistic, digital, legal, etc. – requirements that indigenous knowledge systems often cannot feasibly meet, precisely because of the different purpose and paradigm for which these systems exist.

In addition to the factors above, there are additional barriers that are unique to an evaluation context. One of the more central challenges is that evaluations are often specifically designed with a use focus in mind (King & Alkin 2019). This means that the paradigms and worldviews of the commissioning organisations are centred on evaluation planning processes, and these are often at odds with indigenous worldviews. Furthermore, many people and organisations experience ‘confirmation bias’ and tend to be more willing to accept evaluation results that reflect their pre-existing worldviews (Dickinson 2020). This makes it particularly challenging for marginalised paradigms to be taken up more broadly (Castleden et al. 2017) and requires, as Wilson (2008) says, ‘deep listening and hearing with more than the ears’.

Pre-existing notions of which paradigm should lead the evaluation agenda are not only built into evaluation planning processes; they are also often institutionalised bureaucratically in development organisations. Notions of who should be an evaluator and what kinds of organisations hold the legitimacy to bid for evaluation projects often hold back indigenous evaluators from leading the thought process on evaluations, even when these people are part of the community of intended beneficiaries (Cram & Mertens 2016; Van Rensburg & Loye 2021).

A further difficulty in integrating indigenous knowledge systems research into evaluation arises from their common ‘sectoral agnosticism’. Whilst both fields hold a certain transdisciplinarity and focus on contextually relevant methods in common, there are also disciplinary norms and contextual diversity that make it difficult to share lessons, methods, approaches and paradigms across individual topical and geographic areas. This was described in a recent meta-analysis of studies on indigenous knowledge in climate adaptation that described the work on the subject within the Intergovernmental Panel on Climate Change (IPCC) as ‘regionally heterogeneous and thematically generic’ (Petzold et al. 2020), highlighting the challenges of building a deep understanding of highly localised beliefs and practices through the aggregation of multiple local approaches. Similarly, Sharma (2021:35) points to the challenges of engaging with the variations in indigenous knowledge by articulating the challenge of neocolonial knowledge production ‘as a uniform system of oppression rather viewing it as dispersed practices that have produced hierarchies and silences’. Various scholars have defined and aggregated these approaches in different ways, but they have been consistently difficult to scale or apply across contexts, precisely because they are so embedded in individual cultures, programmes or situations (Gaotlhobogwe et al. 2018).

And finally, one of the biggest challenges to integrating indigenous knowledge systems into evaluation is that many of these systems of knowledge are, almost by definition, difficult to access. As shown in Figure 3, the characteristics of indigenous knowledge remain the same whether this knowledge is mobilised for research or evaluation. This knowledge may exist in languages that are not often translated, or may not exist in written form at all. It may be deeply embedded in a cultural context that has not been described in detail, and it may be held by communities and difficult to extract from its unique context. In fact, researchers who are not from the community may not even know what form the knowledge takes, whether cultural rituals, oral histories, various forms of art, written archives, intergenerational lessons or more. Even where this is known, these diverse forms of knowledge vary in the ease with which they can be captured and translated into academic norms. These challenges are not insurmountable, and many of the studies provide strategies, tools and approaches for carrying out research despite them. For example, ‘[t]he absence of documentation which seems to be regarded as a setback for perpetuation of IKS may be resolved through audio and video tapes’ (Hlalele 2019).

FIGURE 3: Contextual challenges to the integration of indigenous knowledge into research and evaluation systems.

Considerations for theory and practice

Research on indigenous knowledge systems has the potential to contribute significantly to ongoing exploratory research about African evaluation methods and approaches. There is a clear need for further synergy and joint exploration. This will be made easier by the number of shared views around power imbalances in current development practice, an acknowledgement that most formal structures of knowledge generation are exclusionary and a commonly voiced critique over the exclusion of indigenous people, as well as people with other intersectional identities, not only from formal institutions of knowledge generation but also from institutions where this knowledge is used for decision-making (Sylvain 2014).

What both disciplines bring is a multitude of tools and approaches that can be used to centre and value indigenous knowledge in the research or evaluation process. These include well-documented methodological choices such as appreciative inquiry and participatory approaches. However, they go further than this to take a systemic view of the data collection process and intentionally interrogate the power dynamics at each node of this process. The identity of the researcher, entry points to the community, the selection of participants in research, the ownership of the data and so on – all of these are areas that require more joint exploration by evaluators and indigenous knowledge systems researchers.

Concretely defining and advancing a ‘Made in Africa’ approach to evaluation will require some level of defining and synthesising a range of worldviews that hold certain common characteristics, some of which may be undefined or contradictory. This is a task that should be done with some care, because there are risks of dichotomising African and Western worldviews, which is certainly not the nature of contemporary knowledge in the region. Furthermore, it risks romanticising some possibly unknowable ideal that was likely developed to reflect a social and ecological order very different from the one that currently exists. Finally, it risks homogenising indigenous communities, which have contestations over power, voice, authority and narrative in their own contexts. The challenge of contextualising indigenous knowledge in the current evaluation landscape is one that should be approached with genuine listening (Latulippe & Klenk 2020).

There is a further risk that the process of accessing indigenous knowledge for the goals and objectives of strengthening the development evaluation sector will be extractive, however well-intentioned the process. The processes of co-production, intentional listening and building on indigenous research methodologies will be critical for mediating this risk (Chilisa 2019). This is framed by different texts in different ways. For Gillespie et al. (2020), ‘[a]ccountability is a core aspect of relationship and concerns how we respect and maintain balance in our relationships and honour the responsibility that comes with fulfilling our relationships’.

Mostly absent from the studies on indigenous knowledge systems, surprisingly, were reflective inputs on ethics. Whilst this is an area of considerable research focus globally, it has been significantly overlooked in Africa. There is scope for global exchange around this issue, but it should not be neglected.

Significant work has been conducted to describe methodologies and processes that are appropriate for culturally responsive evaluations, as well as indigenous methodologies that describe processes of access and description (Goforth et al. 2021). However, less has been done to reflect on how these methodologies have been applied in the specific community context of the study. Furthermore, rigorous comparative work would help build an evidence base on methodological appropriateness in different contexts, or for different purposes. Whilst a researcher’s identity is undoubtedly enabling for some purposes, it may be a barrier for others, as may the ultimate use objectives of the study.

Finally, processes of participation and co-creation have been central to building an intersection between evaluation and traditional knowledge systems (Dahler-Larsen & Mbava 2019). Such processes would no doubt be even more promising at the intersection of evaluation and indigenous knowledge systems research. They would not only centre and legitimise indigenous knowledge in the evaluation process, but could also be a transformative lever within the evaluation space itself, allowing emerging evaluators, indigenous evaluators and others to contribute to evaluation debates.

Conclusion

Indigenous knowledge systems research cannot necessarily transform the evaluation landscape in Africa, but it has an important role to play in helping define a contextually relevant approach. The value it holds for evaluation research is clear, despite a range of barriers that pose a challenge to its application and uptake. Indigenous knowledge systems research is also uniquely located to contribute to the ongoing ‘Made in Africa’ evaluation discourse, which would benefit from a process of co-creation.

One of the most promising tools that can link indigenous knowledge systems research with the evaluation sector is the rich body of work that speaks about processes, methodologies and approaches that help researchers, whether indigenous or nonindigenous, access these knowledge systems. Given the challenges of describing, synthesising and applying such a wide range of paradigms and the risks that come with over-simplifying a diversity of ontological approaches, a process-based method that can map ways of engagement seems like a suitable pathway for the regional evaluation community to travel in the process of listening, learning and building decolonial approaches.

An area identified for further research is that of sector reviews carried out jointly across evaluation databases and those of evaluation practitioners in the region. Here, collaboration could move beyond methods, processes and approaches and look at synthesis work around different thematic areas. This could strengthen both areas of practice considerably and uncover additional information about key questions for both bodies of literature, including the complex relationship between the identity of the researcher, their relationship with the community in question and the research results themselves.

Acknowledgements

The author would like to thank the Centre for Learning on Evaluation and Results for acknowledging the importance of this topic and supporting future research in the area. Additional thanks go to Professor Bagele Chilisa, for laying the foundations in this area of work, and to the School of Governance, for its support in building research communities.

Competing interests

The author declares that she has no financial or personal relationships that may have inappropriately influenced her in writing this article.

Author’s contributions

C.B-M. declares that she is the sole author of this research article.

Ethical considerations

This article followed all ethical standards for research without direct contact with human or animal subjects.

Funding information

This research received funding from the Centre for Learning on Evaluation and Results – Anglophone Africa, under the grant that supported the Made in Africa special issue.

Data availability

Data sharing is not applicable to this article as no new data were created or analysed in this study.

Disclaimer

The views and opinions expressed in this article are those of the author and do not necessarily reflect the official policy or position of any affiliated agency of the author.

References

Abma, T.A., 2006, ‘The practice and politics of responsive evaluation’, American Journal of Evaluation 27(1), 31–43. https://doi.org/10.1177/1098214005283189

Arksey, H. & O’Malley, L., 2005, ‘Scoping studies: Towards a methodological framework’, International Journal of Social Research Methodology 8(1), 19–32. https://doi.org/10.1080/1364557032000119616

Armatas, C.A., Venn, T.J., McBride, B.B., Watson, A.E. & Carver, S.J., 2016, ‘Opportunities to utilize traditional phenological knowledge to support adaptive management of social-ecological systems vulnerable to changes in climate and fire regimes’, Ecology and Society 21(1), 16. https://doi.org/10.5751/ES-07905-210116

Auriacombe, C. & Cloete, F., 2019, ‘Revisiting decoloniality for more effective research and evaluation’, African Evaluation Journal 7(1), 1–10. https://doi.org/10.4102/aej.v7i1.363

Balogun, T. & Kalusopa, T., 2021, ‘Web archiving of indigenous knowledge systems in South Africa’, Information Development, 1–14. https://doi.org/10.1177/02666669211005522

Bamberger, M., 2015, ‘Management of complexity-responsive evaluations’, in M. Bamberger, J. Vaessen & E. Raimondo (eds.), Dealing with complexity in development evaluation: A practical approach, pp. 48–59, Sage, London.

Bamberger, M. & Podems, D.R., 2002, ‘Feminist evaluation in the international development context’, New Directions for Evaluation 2002(96), 83–96. https://doi.org/10.1002/ev.68

Biggs, R., De Vos, A., Preiser, R., Clements, H., Maciejewski, K. & Schlüter, M., 2021, The Routledge handbook of research methods for social-ecological systems, Taylor & Francis, New York.

Blaser Mapitsa, C. & Khumalo, L., 2018, ‘Diagnosing monitoring and evaluation capacity in Africa’, African Evaluation Journal 6(1), 1–10. https://doi.org/10.4102/aej.v6i1.255

Casimirri, G., 2003, ‘Problems with integrating traditional ecological knowledge into contemporary resource management’, paper submitted to the XII World Forestry Congress, Quebec City, Canada.

Castleden, H.E., Martin, D., Cunsolo, A., Harper, S., Hart, C., Sylvestre, P. et al. 2017, ‘Implementing indigenous and western knowledge systems (part 2): “You have to take a backseat” and abandon the arrogance of expertise’, International Indigenous Policy Journal 8(4), 1–18. https://doi.org/10.18584/iipj.2017.8.4.8

Chaplowe, S. & Hejnowicz, A., 2021, ‘Evaluating outside the box: Evaluation’s transformational potential’, Social Innovations Journal 5, 1–19.

Chilisa, B., 2019, Indigenous research methodologies, Sage, Newbury Park, California.

Chilisa, B. & Mertens, D.M., 2021, ‘Indigenous made in Africa evaluation frameworks: Addressing epistemic violence and contributing to social transformation’, American Journal of Evaluation 42(2), 241–253. https://doi.org/10.1177/1098214020948601

Cornielje, H., 2021, ‘How neo-colonial are we in what we are doing?’, Disability, CBR & Inclusive Development 32(3), 3–8. https://doi.org/10.47985/dcidj.533

Cram, F. & Mertens, D.M., 2016, ‘Negotiating solidarity between indigenous and transformative paradigms in evaluation’, Evaluation Matters-He Take Tō Te Aromatawai 2(2), 161–189. https://doi.org/10.18296/em.0015

Dahler-Larsen, P. & Mbava, N.P., 2019, ‘Evaluation in African contexts: The promises of participatory approaches in theory-based evaluations’, African Evaluation Journal 7(1), 1–9. https://doi.org/10.4102/aej.v7i1.383

Denny-Smith, G., Loosemore, M., Barwick, D., Sunindijo, R. & Piggott, L., 2019, ‘Decolonising Indigenous Social Impact Research Using Community-Based Methods’, in C. Gorse & C.J. Neilson, (eds), Proceedings of the 35th Annual ARCOM Conference, September 2–4, 2019 Association of Researchers in Construction Management, Leeds, UK, pp. 64–73.

Dickinson, D.L., 2020, ‘Deliberation enhances the confirmation bias in politics’, Games 11(4), 57. https://doi.org/10.3390/g11040057

Forestieri, M., 2020, ‘Equity implications in evaluating development aid: The Italian case’, Journal of Multidisciplinary Evaluation 16(34), 65–90.

Gadgil, M., Berkes, F. & Folke, C., 2021, ‘Indigenous knowledge: From local to global’, Ambio 50(5), 967–969. https://doi.org/10.1007/s13280-020-01478-7

Gaotlhobogwe, M., Major, T.E., Koloi-Keaikitse, S. & Chilisa, B., 2018, ‘Conceptualizing evaluation in African contexts’, New Directions for Evaluation 2018(159), 47–62. https://doi.org/10.1002/ev.20332

Gillespie, J., Albert, J., Grant, S. & MacKeigan, T., 2020, ‘Missing in action: Indigenous knowledge systems in evaluation of comprehensive community initiatives’, Canadian Journal of Program Evaluation 35(2), 1–19. https://doi.org/10.3138/cjpe.69099

Goforth, A.N., Nichols, L.M., Sun, J., Violante, A., Christopher, K. & Graham, N., 2021, ‘Incorporating the indigenous evaluation framework for culturally responsive community engagement’, Psychology in the Schools, 170–189. https://doi.org/10.1002/pits.22533

Gullickson, A.M. & Hannum, K.M., 2019, ‘Making values explicit in evaluation practice’, Evaluation Journal of Australasia 19(4), 162–178. https://doi.org/10.1177/1035719X19893892

Guyadeen, D. & Seasons, M., 2018, ‘Evaluation theory and practice: Comparing program evaluation and evaluation in planning’, Journal of Planning Education and Research 38(1), 98–110. https://doi.org/10.1177/0739456X16675930

Hankivsky, O. & Cormier, R., 2011, ‘Intersectionality and public policy: Some lessons from existing models’, Political Research Quarterly 64(1), 217–229. https://doi.org/10.1177/1065912910376385

Hlalele, D.J., 2019, ‘Indigenous knowledge systems and sustainable learning in rural South Africa’, Australian and International Journal of Rural Education 29(1), 88–100.

Jansen Van Rensburg, M.S. & Mapitsa, C.B., 2017, ‘Gender responsiveness diagnostic of national monitoring and evaluation systems-methodological reflections’, African Evaluation Journal 5(1). https://doi.org/10.4102/aej.v5i1.191

King, J.A. & Alkin, M.C., 2019, ‘The centrality of use: Theories of evaluation use and influence and thoughts on the first 50 years of use research’, American Journal of Evaluation 40(3), 431–458.

Kithatu-Kiwekete, A. & Phillips, S., 2020, ‘The effect of public procurement on the functioning of a national evaluation system: The case of South Africa’, International Journal of Social Sciences and Humanity Studies 12(1), 18–33.

Konadu, K.B., 2007, Indigenous medicine and knowledge in African society, Routledge, London.

Latulippe, N. & Klenk, N., 2020, ‘Making room and moving over: Knowledge co-production, indigenous knowledge sovereignty and the politics of global environmental change decision-making’, Current Opinion in Environmental Sustainability 42, 7–14. https://doi.org/10.1016/j.cosust.2019.10.010

Lawson, C. & Adhikari, K., 2018, Biodiversity, genetic resources and intellectual property, pp. 1–8, Routledge, Milton Park.

Levac, L., McMurtry, L., Stienstra, D., Baikie, G., Hanson, C. & Mucina, D., 2018, ‘Learning across Indigenous and western knowledge systems and intersectionality: Reconciling social science research approaches’, Unpublished SSHRC Knowledge Synthesis Report, University of Guelph, Guelph.

Lub, V., 2015, ‘Validity in qualitative evaluation: Linking purposes, paradigms, and perspectives’, International Journal of Qualitative Methods 14(5), 1–8. https://doi.org/10.1177/1609406915621406

Mangare, C.F. & Li, J., 2018, ‘A survey on indigenous knowledge systems databases for African traditional medicines’, in Proceedings of the 2018 7th international conference on bioinformatics and biomedical science, pp. 9–15, Association for Computing Machinery, Shenzen, China.

Mapitsa, C.B., Tirivanhu, P. & Pophiwa, N. (eds.), 2019, Evaluation landscape in Africa, African Sun Media, Stellenbosch.

Mbava, N.P. & Chapman, S., 2020, ‘Adapting realist evaluation for made in Africa evaluation criteria’, African Evaluation Journal 8(1), 11. https://doi.org/10.4102/aej.v8i1.508

McGloin, C., 2015, ‘Listening to hear: Critical allies in indigenous studies’, Australian Journal of Adult Learning 55(2), 267–282.

Mouton, J., 2007, ‘Approaches to programme evaluation research’, Journal of Public Administration 42(6), 490–511.

Ngwabi, N. & Wildschut, L., 2019, ‘Scandinavian donors in Africa: A reflection on evaluation reporting standards’, Evaluation Landscape in Africa–Context, Methods and Capacity 89. https://doi.org/10.18820/9781928480198/04

Ofir, Z., 2021, ‘Evaluation in transition: The promise and challenge of south-south development cooperation’, Canadian Journal of Program Evaluation 36(2), 120–140. https://doi.org/10.3138/cjpe.71630

Oloruntoba, S.O., Afolayan, A. & Yacob-Haliso, O., 2020, Indigenous knowledge systems and development in Africa, Palgrave Macmillan, Cham, Switzerland.

Omosa, O., Archibald, T., Niewolny, K., Stephenson, M.O. Jr. & Anderson, J., 2021, ‘Towards defining and advancing “Made in Africa Evaluation”’, African Evaluation Journal 9(1), a564. https://doi.org/10.4102/aej.v9i1.564

Omosa, O.O., 2019, ‘Towards defining’, Doctoral dissertation, Virginia Tech.

Parsons, M., Nalau, J. & Fisher, K., 2017, ‘Alternative perspectives on sustainability: Indigenous knowledge and methodologies’, Challenges in Sustainability 5(1), 7–14. https://doi.org/10.12924/cis2017.05010007

Patton, M.Q., 2021, ‘Evaluation criteria for evaluating transformation: Implications for the coronavirus pandemic and the global climate emergency’, American Journal of Evaluation 42(1), 53–89. https://doi.org/10.1177/1098214020933689

Petzold, J., Andrews, N., Ford, J.D., Hedemann, C. & Postigo, J.C., 2020, ‘Indigenous knowledge on climate change adaptation: A global evidence map of academic literature’, Environmental Research Letters 15(11), 113007. https://doi.org/10.1088/1748-9326/abb330

Raimondo, E., 2018, ‘The power and dysfunctions of evaluation systems in international organizations’, Evaluation 24(1), 26–41. https://doi.org/10.1177/1356389017749068

Reynolds, M., Gates, E., Hummelbrunner, R., Marra, M. & Williams, B., 2016, ‘Towards systemic evaluation’, Systems Research and Behavioral Science 33(5), 662–673. https://doi.org/10.1002/sres.2423

Robinson, N., 2021, ‘Evaluation warriorship: Raising shields to redress the influence of capitalism on program evaluation’, Genealogy 5(1), 15. https://doi.org/10.3390/genealogy5010015

Saran, A. & White, H., 2018, ‘Evidence and gap maps: A comparison of different approaches’, Campbell Systematic Reviews 14(1), 1–38. https://doi.org/10.4073/cmdp.2018.2

Sebastian-Coleman, L., 2013, ‘Data quality and measurement’, in Measuring data quality for ongoing improvement, pp. 39–53, Morgan Kaufmann, Waltham, MA.

Sharma, A., 2021, ‘Decolonizing international relations: Confronting erasures through indigenous knowledge systems’, International Studies 58(1), 25–40. https://doi.org/10.1177/0020881720981209

Sibanda, A. & Ofir, Z., 2021, ‘Evaluation in an uncertain world: A view from the Global South’, in R.D. van den Berg, C. Magro & M.-H. Adrien (eds.), Transformational evaluation for the global crises of our times, pp. 37–61.

Sylvain, R., 2014, ‘Essentialism and the indigenous politics of recognition in Southern Africa’, American Anthropologist 116(2), 251–264. https://doi.org/10.1111/aman.12087

Thompson, K.L., Lantz, T. & Ban, N., 2020, ‘A review of indigenous knowledge and participation in environmental monitoring’, Ecology and Society 25(2), 10. https://doi.org/10.5751/ES-11503-250210

Tirivanhu, P. & Mapitsa, C.B., 2019, ‘Indigenising evaluation knowledge: Exploring the epistemic identity of African evaluators’, in Evaluation landscape in Africa, African Sun Media, Stellenbosch.

Tran, P., Takeuchi, Y. & Shaw, R., 2009, ‘Indigenous knowledge in river basin management’, in R. Shaw, A. Sharma et al. (eds.), Indigenous knowledge and disaster risk reduction: From practice to policy, pp. 45–58, Nova Science Publishers, New York, NY.

Uwizeyimana, D.E., 2021, ‘The need and feasibility of a separate Africa-rooted programme evaluation approach’, African Journal of Development Studies (Formerly AFFRIKA Journal of Politics, Economics and Society) 11(3), 101–120.

Van Rensburg, M.S.J. & Loye, A.S., 2021, ‘Young and emerging African evaluators’ need for gender responsive evaluation training’, African Evaluation Journal 9(1), 6. https://doi.org/10.4102/aej.v9i1.556

Weerawardhana, C., 2018, ‘Profoundly decolonizing? Reflections on a transfeminist perspective of international relations’, Meridians 16(1), 184–213. https://doi.org/10.2979/meridians.16.1.18

Wilson, S., 2008, Research is ceremony: Indigenous research methods, Fernwood, Halifax, NS.


 
