Abstract
Background: African development evaluation stakeholders express discontent with Euro-American evaluation methodologies and advocate for formulating evaluation theories and approaches rooted in African wisdom and values.
Objectives: This article focuses on generating wisdom and philosophical insights from Swahili proverbs, constructing a robust philosophy of evaluation and using emerging theoretical and methodological insights to guide development evaluations.
Method: Forty-five Swahili proverbs were analysed to uncover their philosophical insights and implications for development evaluation practices.
Results: Based on established philosophical insights, the evaluand is a single but multifaceted phenomenon; knowledge about the evaluand is possible through close and trusted relationships with people who have experienced it; adequate learning about and production of the credible history of the evaluand is possible in well-established humane relations; and the credible information and evidence on aspects of the evaluand are generated in well-managed processes of inquiry and assessment. These philosophical beliefs form the core of the Swahili evaluation approach, providing valuable guidance for development evaluators in selecting and engaging legitimate stakeholders, managing evaluation processes effectively, utilising diverse forms of knowledge and assessment criteria, and facilitating co-generation, co-learning, and co-validation of findings and reporting on project or programme successes, failures, and lessons.
Conclusion: The philosophy of evaluation underpinning the Swahili evaluation approach provides adequate guidance on the what, why and how of conducting development evaluations.
Contribution: This research contributes to the proverb-based approach to developing African-rooted evaluation theories and approaches by offering lessons on generating and applying philosophical insights to inform and improve development evaluation practices.
Keywords: African proverbs and wise sayings; Swahili proverbs and wise sayings; Swahili wisdom; African-rooted evaluation; indigenous evaluation; people-centric evaluation; democratic evaluation; participatory evaluation; philosophy of evaluation; development evaluation.
Introduction
Development evaluation refers to the systematic and objective assessment of ongoing or completed development programmes or projects to determine their relevance, efficiency, effectiveness, impact and sustainability (Morra-Imas & Rist 2009; Organisation for Economic Co-operation and Development [OECD] 1991; United Nations Development Programme [UNDP] 2009). There is a growing interest in development evaluation in Africa, as evidenced by the rise in government organisations and private and non-government organisations conducting and using evaluations (Abrahams 2015; Porter & Goldman 2013). There has also been an increase in African governments developing monitoring and evaluation systems (Chirau et al. 2020), academic and professional evaluation programmes, and people studying development monitoring and evaluation (Basheka & Byamugisha 2015; Global Evaluation Initiative [GEI] 2024). Furthermore, there is an increase in voluntary organisations for professional evaluation (VOPEs) promoting evaluation culture in African countries.
At the same time, there is growing dissatisfaction with Euro-American evaluation paradigms, methods, criteria and standards, as they have limitations in assisting people to carry out evaluations that matter to them. These paradigms and methodologies fail to capture intricate contextual issues and the needs and priorities of African people (Chilisa & Mertens 2021). Additionally, they often lead to inadequate assessments and wrong prescriptions (Chilisa et al. 2016; Jeng 2012), and their evaluation criteria and standards do not always reflect African realities (Chilisa 2015; Gaotlhobogwe et al. 2018).
Responses to these discontents have included efforts to resist the blind borrowing of Euro-America-rooted paradigms, methodologies, criteria and standards to evaluate development interventions in Africa; to build the capacity of African evaluators to carry out their own evaluations; to promote and adapt evaluation tools, instruments, strategies, theories and models to ensure relevance in African settings; and to develop novel evaluation theories and methodologies that emanate from African cultures and philosophies (Chilisa et al. 2016). The latter response entails developing African-rooted evaluation theories and methodologies.
The possibility of developing evaluation theories, approaches and methodologies rooted in the worldviews, value systems and ways of knowing of African people was powerfully articulated and supported by participants in the Special Stream of the Fourth Conference of the African Evaluation Association (AfrEA), held in Niamey, Niger, on 18 January 2007 (AfrEA 2007). Since then, AfrEA has been promoting Africa-rooted evaluations and ensuring that African values and worldviews inform, guide and shape the theory and practice of development evaluation in Africa (AfrEA 2007).
In line with AfrEA’s efforts, some scholars have developed African-rooted evaluation theories and methodologies. For instance, Carroll (2008) devised an evaluation methodology and questions based on African worldviews; Muwanga-Zake (2009) designed an evaluation process based on the Afrocentric paradigm and Ubuntu philosophy; Chilisa and Malunga (2012) and Easton (2012) generated conceptual and theoretical frameworks in evaluation based on African proverbs and metaphors; Chilisa et al. (2016) constructed relational evaluation approaches based on the values and knowledge systems of indigenous people; Mbava and Chapman (2020) developed a realist evaluation framework in line with the principles of Made in Africa Evaluations; and Chilisa and Mertens (2021) constructed the Indigenous Made in Africa Evaluation Framework based on the values, culture and ethics of indigenous people in Africa.
Nevertheless, the philosophies of evaluation underpinning the above frameworks and approaches provide inadequate guidance on what, why and how we do development evaluations (Smith 2008). This calls for further investigation of people’s ontological, axiological, epistemological and methodological beliefs to adequately guide the knowing and judging of development projects and programmes (Mertens & Wilson 2019). The generated philosophical beliefs would enlighten us about the nature of what we evaluate, how to assign value to the evaluand and its performance, and how to construct and use knowledge from evaluations (Donaldson & Lipsey 2006:57).
I firmly believe that philosophical insights embodying African worldviews and collective wisdom can provide a basis ‘for program evaluation’s intent, motivation for the evaluation, expected outcomes, choice of methodology, methods and evaluation strategies or design and interpretation, and dissemination of evaluation findings’ (Chilisa et al. 2016:317). I also believe that wisdom in African proverbs can inspire and shape development evaluation practices in Africa. Hence, between 2020 and 2023, I collected and analysed 45 Swahili proverbs to generate the philosophical insights they embody.
The 45 Swahili proverbs contain the wisdom, morals and traditional views of Tanzania’s indigenous people regarding the world and its social, economic, and political issues and processes. The proverbs were collected and translated into Kiswahili during the 1970s and 1980s. They are taught and learned in formal and informal settings and are widely used by competent Swahili speakers. I analysed the 45 proverbs to uncover indigenous wisdom relevant to the process and practice of development evaluation. I established ontological, epistemological, methodological and axiological beliefs, which were instrumental in laying the philosophical foundation for the Swahili Evaluation Approach I created.
This article presents the Swahili Evaluation Approach as a valuable resource for those involved in designing and conducting development evaluations. It highlights the processes of generating philosophical insights from proverbial wisdom, constructing a robust evaluation philosophy, and using emerging theoretical insights and methodological guidelines to design and implement evaluations.
Ethical considerations
This article followed all ethical standards for research.
Methodology and theoretical framework
The 45 Swahili proverbs were collected from published sources and websites listed in the reference section (Methali n.d.; Methali za Kiswahili n.d.; Swahili Proverbs n.d.). The selected proverbs carry messages about people’s participation in social, economic and political activities. The steps described below were followed to analyse, interpret and apply the meanings and wisdom of these proverbs to development evaluation practices.
Firstly, the selected proverbs were divided into three groups. The first group includes proverbs with messages and guidance about involving people in social processes. The second group contains proverbs with messages about the knowledge, skills, rights and responsibilities of people participating in social processes. The last group of proverbs covers the conditions and criteria for selecting and engaging people in social processes.
Secondly, a content analysis was conducted on the proverbs in the three groups to uncover their meanings and wisdom. The uncovered wisdom: (1) emphasises generating credible evidence, (2) underscores the importance of impartial and evidence-based judgements, (3) insists on paying attention to indicators or pointers to specific issues, (4) highlights the importance of possessing competencies that facilitate proper inquiries and evidence generation, proper valuation and valuing, and impartial and independent judgements; and (5) urges people to: (a) keep their promises, (b) take preventive measures against possible failures and (c) seek help in redressing experienced difficulties.
Thirdly, the identified meanings and uncovered wisdom were interpreted and applied to various aspects of development evaluation practice in line with the guidance of Schwandt and Gates (2021) and Norris (2015). The meanings and wisdom of Swahili proverbs and aspects of the evaluation practice they promote are reported and discussed in Mazigo et al. (2024).
Fourthly, an additional content analysis was conducted on the uncovered wisdom to determine their underlying philosophical beliefs. This analysis was guided by the questions on generating philosophical beliefs for an evaluation approach proposed by Mertens and Wilson (2019). According to Mertens and Wilson (2019:38–46), a robust evaluation approach must have an axiological belief that responds to the question about the nature of ethics, an ontological belief that responds to the question about the nature of reality, an epistemological belief that responds to questions about the nature of knowledge and ways of knowing, and a methodological belief that responds to questions about gathering credible information about the phenomenon.
Mertens and Wilson’s questions were revised to better focus on the aspects of the evaluand. The revised questions included the following: What is the nature of the evaluand? (Ontological question); How do evaluators gain a better understanding of the evaluand? (Epistemological question); What ethical values and principles should guide interactions in knowing and judging the evaluand? (Axiological question); and What would facilitate the collection of credible information and evidence about the evaluand? (Methodological question). The generated beliefs were coded for ontological, epistemological, axiological and methodological beliefs.
Fifthly, the generated philosophical beliefs were analysed to establish their implications for designing systematic inquiries and engaging stakeholders in knowing, valuing and judging development projects and programmes. The following sections discuss the generated philosophical beliefs and their roles in supporting people-driven development evaluation practices.
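Before turning to those beliefs, the minimal sketch below (in Python) offers one hypothetical way of recording proverbs, their uncovered wisdom and the manually assigned belief codes described in the fourth step, and of tallying how many proverbs support each belief category. It is an illustrative aid only: the record structure, field names and the example entry and its code assignments are assumptions, not the instrument used in this study.

```python
from dataclasses import dataclass, field

# The four guiding questions used in the fourth analysis step (from the text).
BELIEF_QUESTIONS = {
    "ontological": "What is the nature of the evaluand?",
    "epistemological": "How do evaluators gain a better understanding of the evaluand?",
    "axiological": "What ethical values and principles should guide interactions "
                   "in knowing and judging the evaluand?",
    "methodological": "What would facilitate the collection of credible information "
                      "and evidence about the evaluand?",
}

@dataclass
class ProverbRecord:
    text: str                                          # proverb in Kiswahili
    gloss: str                                         # English rendering
    wisdom: str                                        # analyst's summary of the uncovered wisdom
    belief_codes: list = field(default_factory=list)   # codes assigned manually by the analyst

def tally_codes(records):
    """Count how many proverb records were coded under each belief category."""
    counts = {code: 0 for code in BELIEF_QUESTIONS}
    for record in records:
        for code in record.belief_codes:
            counts[code] += 1
    return counts

# Hypothetical record; the code assignment is illustrative, not computed.
records = [
    ProverbRecord(
        text="Nyumba usiyolala ndani huijui hila yake",
        gloss="You cannot know the defects of a house you have not slept in",
        wisdom="Aspects of a phenomenon are known through direct experience of it",
        belief_codes=["ontological", "epistemological"],
    ),
]
print(tally_codes(records))
```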
Philosophical beliefs
Wisdom uncovered in the studied Swahili proverbs informed the framing of development evaluations as social activities and processes initiated by and involving people. People initiate and lead such social activities and processes to assess and judge promise-keeping in implementing and managing development projects and programmes; to learn, assess and establish preventive and corrective measures for ongoing development projects and programmes; and to collaboratively learn and construct histories of completed development projects and programmes (cf. Mazigo et al. 2024). The philosophical beliefs supporting these people-driven development evaluation practices are discussed further in the text.
Ontological belief
The evaluand is the object of evaluation. It can be a social activity, event, object, project, or programme. The ontological belief must provide insights into the nature of the object of evaluation (Donaldson & Lipsey 2006; Mertens & Wilson 2019). The ontological belief was located in wisdom in the proverbs Nyumba usiyolala ndani huijui hila yake [You cannot know the defects of a house you have not slept in] and Kitanda usicho kilalia hujui kunguni wake [You cannot know the bugs of a bed that you have not lain on].
Phenomena such as a house and a bed must exist and be experienced for their various aspects to be known and evaluated. However, personal experience of the same phenomenon can differ depending on factors such as time and seasons. For example, people sleeping in the same house during different seasons (rainy or dry season) may observe and record different defects. Similarly, someone who uses a bed during the day may not experience bugs in the same way as someone who sleeps on that bed at night, as the bugs become active at night.
From the above discussion, it is safe to conclude that people experience and know real phenomena differently, and every person’s shared evidence contributes to a better collective understanding of the phenomenon. This wisdom points to the belief in a single but multifaceted phenomenon. Accordingly, an ongoing or completed development project or programme is a singular and objective entity existing in the real world that individuals experience differently.
Epistemological belief
The evaluation comprises systematic inquiries about, valuing and judging the evaluand. It was established earlier in the text that the evaluand is a single but multifaceted phenomenon that can be known. The questions about knowing the evaluand are epistemological. In the evaluation context, the epistemological belief must comprise appropriate responses to the question of how evaluators come to know the evaluand better.
The epistemological belief was identified in proverbs Matundu ya nyumba ayafahamu mwenye nyumba [Only the house owner knows holes in the house], Adhabu ya kaburi aijua maiti [The torture of the grave is known only to the dead] and Jogoo wa shamba hawiki mjini [A country rooster would not crow in town]. These proverbs convey the idea that the nuances and complexities of a phenomenon are best understood by those who have personally experienced it. According to this wisdom, only individuals with direct experience with the phenomenon can offer valid insights into it. Those who lack personal experience can still learn about the phenomenon from those who have experienced it firsthand.
Wisdom in these proverbs points to the belief that the phenomenon can be known through personal experience or through close and collaborative learning from people who have experienced and developed some knowledge about it. This epistemological belief suggests that knowledge of the evaluand is possible through trusted collaborative learning opportunities with people who have personal knowledge and experience of it. Consequently, an external evaluation expert must learn from these people to better understand, explain and report on various aspects of the evaluand.
Axiological belief
Evaluation is a social activity and process that must involve people. This fact was underscored in the proverb Shughuli ni watu [A social activity or event needs people]. We need to understand the meaning of ‘mtu’ and ‘watu’ to appreciate the guidance of this proverbial wisdom. The term ‘mtu’ (plural ‘watu’) refers to human beings’ rational nature and their personhood status (utu). Human rationality comprises intellect, which enables knowledge, and free will, which enables choice. As such, people possess the potential and the ability to learn and make decisions. Personhood is a status that humans earn by fulfilling their personal and communal responsibilities. Communities support their members’ pursuit of and development into personhood. Only people who use ‘their intellectual and moral capacities’ to ‘organize and creatively order their biological and social functions in the service of socio-culturally imposed goals’ attain full personhood (Mazigo 2021:130). Thus, people must utilise social, economic and political opportunities to enhance their personhood status.
The proverbial message that social activities and processes require people emphasises: (1) people’s ability to know and make choices and (2) the need to offer opportunities that allow people to fulfil their personal and communal duties to progress towards achieving personhood. As knowers, people can know the phenomenon they have experienced, and they can also engage in inquiries and generate knowledge about the phenomenon. As choosers, people make various choices, including engaging in inquiry and assessing aspects of a phenomenon, being impartial and objective inquirers and assessors, and choosing to be objective and fair judges. Individuals committed to pursuing personhood care about the common good, and they can initiate and lead processes to assess and determine what is best for themselves, others, and their communities or societies.
Furthermore, wisdom in some other proverbs recommends that two groups be involved in the evaluation process. The first group should consist of people with direct experience and knowledge of ongoing or completed development projects or programmes, such as the beneficiaries, implementers and funders. The involvement of such individuals is highly valued and emphasised in various proverbs, including Matundu ya nyumba ayafahamu mwenye nyumba [The house owner knows holes in the house], Nyumba usiyolala ndani huijui hila yake [You cannot know the defects of a house you have not slept in], Adhabu ya kaburi aijua maiti [The torture of the grave is known only to the dead] and Kitanda usicho kilalia hujui kunguni wake [You cannot know the bugs of a bed that you have not lain on].
The second group comprises external evaluation experts with adequate knowledge and skills to facilitate objective and systematic inquiries, impartial assessments, and objective judgements of development projects and programmes. They can do so without personal knowledge of or experience with the project or programme. The involvement of skilled external experts to facilitate objective inquiries and impartial assessments is implied in the proverbs Aingiaye baharini huogelea [Whoever enters the sea must swim], Nyani haoni kundule [The ape does not see his backside] and Anayejipiga mwenyewe halii [The person who hits himself does not cry].
If systematic inquiries about and the generation of knowledge of the evaluand are possible through productive engagement with knowledgeable and experienced people, what ethical values and principles would best guide such interactions? This axiological question is meant to generate the content of the axiological belief. Elements of the axiological belief were found in proverbs about human beings and valued interactions in social, political and economic settings.
Ethical values and principles of respect, cooperation, solidarity, and collaborative working and learning were identified in the proverbs Shughuli ni watu [Social events need people], Penye wengi hapaharibiki neno [Where there are many people, nothing goes wrong], Vichwa viwili ni bora kuliko kimoja [Two heads are better than one head], Kidole kimoja hakivunji chawa [One finger does not kill lice], Pekee pekee hauwezi tunga historia [Alone alone, one cannot produce history], Mkono moja hauchinji ngombe [A single hand cannot slaughter a cow] and Mkono moja haulei mwana [A single hand cannot nurse a child]. Wisdom in these proverbs encourages cooperation, respectful interactions and productive participation in social activities.
Ethical values and principles encouraging the promotion of the common good and caring for and promoting the humanity of others were identified in the proverbs Mgeni njoo mwenyeji apone [Let the guest come so that the host benefits or gets well] and Mgeni hachukui nyumba [The guest does not take over the house]. Wisdom in these proverbs urges guests to be helpful to their hosts and to avoid depriving them of their rightful and entitled opportunities. Such wisdom warns the trusted external evaluation expert against depriving people of their various opportunities and entitlements.
The value of humility was uncovered in the proverbs Msafiri maskini ajapokuwa Sultani [A traveller is poor though he may be a sultan] and Jogoo wa shamba hawiki mjini [A country rooster would not crow in town]. Wisdom in both proverbs invites external experts to humble themselves to get help and better learn about local issues.
The above-identified ethical values and principles inspire the establishment of humane relations, which are the foundation for the collaborative learning about and co-production of a credible history of the evaluand. Accordingly, the evaluation facilitators who embrace those ethical values and principles must establish humane relations with diverse stakeholders and fulfil the relational obligations of caring for and promoting the welfare of every human being (Mazigo 2021).
It follows that evaluation ethics based on the wisdom in these proverbs inspire the establishment of respectful and productive interactions and the redress of challenges that limit the productive participation of some stakeholders in the evaluation process.
Methodological belief
If the evaluand is a multifaceted phenomenon that can be known and judged, what would facilitate gathering credible information and evidence about it? This is the question of methodological belief, which requires establishing conditions that guarantee the generation of credible information and evidence. Several proverbs elaborate on participants’ competencies, attitudes, and values in the evaluation process and the integrity of the inquiry, valuing and judging process.
Wisdom in the proverb Aingiaye baharini huogelea [Whoever enters the sea must swim] establishes the adequate competencies condition. Participants must possess adequate technical and social competencies to engage in systematic inquiries about, valuing and judging the evaluand. The technical competencies facilitate systematic inquiries and objective assessments. Wisdom in the proverb Asiyeuliza hanalo ajifunzalo [One who does not ask does not have what he or she needs to learn] underscores inquiry skills such as asking questions; wisdom in the proverb Umdhanie ndiye siye [The one you suspect is, is not] underscores skills in checking and establishing facts; and wisdom in the proverb Chanda chema huvikwa pete [A pleasant finger gets honoured with a ring] emphasises valuing skills, which involve determining merit and worth based on established criteria and standards.
Social competencies facilitate respectful and productive interactions among participants. External experts must demonstrate positive attitudes when interacting with other participants to facilitate inquiries and generate credible information and evidence. Wisdom in the proverb Penye nia pana njia [Where there is a will, there is a way] encourages determination; wisdom in the proverb Msafiri maskini ajapokuwa Sultani [A traveller is poor though he may be a sultan] encourages humility; and wisdom in the proverb Mgeni njoo mwenyeji apone [Let the guest come so that the host benefits or gets well] urges respect and care for the hosts so as to learn better about the evaluand from and with the people who have experienced it.
Other Swahili proverbs establish the process integrity condition. This condition covers the selection of participants and the management of the process. As credible information is generated from credible sources, the lead of the evaluation process must select and involve people who have experienced the evaluand. This is emphasised in proverbs Matundu ya nyumba ayafahamu mwenye nyumba [The house owner knows holes in the house], Nyumba usiyolala ndani huijui hila yake [You cannot know the defects of a house you have not slept in] and Kitanda usicho kilalia hujui kunguni wake [You cannot know the bugs of a bed that you have not lain on].
Besides, wisdom in the proverbs Manahodha wengi chombo huenda mrama [With many captains, the ship does not sail properly] and Wapishi wengi huharibu mchuzi [Too many cooks spoil the broth or sauce] requires the lead of the evaluation process to remain resolute in preventing possible damage. Objective inquiries and impartial assessments are highlighted in the proverb Mlenga jiwe kundini, hajui limpataye [He who throws a stone into a crowd does not know whom it hits], and due consideration of indicators or pointers to evidence is emphasised in the proverbs Dalili ya mvua ni mawingu [The sign of rain is clouds] and Panapofuka moshi pana moto [Wherever smoke emits, there is a fire]. Fact-checking, corroboration and validation of information to avoid errors are underscored in the proverbs Hakuna mti unaokosa chake [There is no tree without its fruit] and Umdhanie ndiye siye [The one you suspect is, is not].
Philosophical insights guiding development evaluations
According to Donaldson and Lipsey (2006), Smith (2008), and Mertens and Wilson (2019), a robust philosophy of evaluation must provide adequate guidance on what, why and how we do development evaluations. In view of this role of the philosophy of evaluation, this section discusses the extent to which the established philosophical insights provide the required guidance.
Regarding the what of evaluation, the philosophical insights generated from Swahili wisdom guide the evaluation of ongoing or completed development projects or programmes. In light of the ontological belief, such projects and programmes are objective phenomena existing in the real world and experienced by people. However, people may know better those aspects of a project or programme to which they had adequate access and the time to learn about and experience. Some participants might know the context, inputs, process, products and outcomes of development projects and programmes better than others. Given this, development evaluators must be aware of, and interrogate their participants about, the aspects of the development project or programme of which they have adequate knowledge and experience.
The philosophical insights generated from Swahili wisdom have adequately established the why of evaluation. According to wisdom in several Swahili proverbs, we must make inquiries about and objective assessments of ongoing or completed development projects and programmes to establish and judge performance in promise-keeping; to learn, assess and establish preventive and corrective measures; and to collaboratively learn and document narratives that comprise the histories of completed development projects and programmes. Firstly, development evaluators must facilitate the generation and assessment of evidence on performance in keeping promises made under the context, inputs, process and products of the ongoing or completed development projects or programmes. Secondly, development evaluators must facilitate the generation of evidence to support devising measures to prevent and correct actual and possible failures in implementing and managing development projects and programmes. Thirdly, development evaluators must facilitate the generation of evidence on successes, challenges and lessons to support collaborative learning and the production of the history of the completed project or programme.
The how of evaluation is addressed under the epistemological, axiological and methodological beliefs generated from Swahili proverbs. According to the established epistemological belief, knowledge about ongoing or completed development projects or programmes is possible in close and trusted relationships with people who have experienced them. Consequently, development evaluators must engage people with experience with the project or programme and pay attention to and engage with their ways of knowing, valuing, measuring and validating evidence about aspects of the project or programme.
The established axiological belief suggests that adequate co-learning about, co-generation of evidence on and co-production of credible histories of completed development projects or programmes happen in close and trusted relationships. This calls for development evaluators to establish close and trusted relationships with the people who have experienced the project or programme.
The established methodological belief suggests that credible information and evidence on aspects of ongoing or completed development projects and programmes are generated in well-established and well-managed processes of co-generation, co-learning and co-validation of evidence. In line with this methodological belief, development evaluators must cultivate and use appropriate competencies, positive attitudes and empowering values; promote co-learning and co-production of knowledge; be cautious and always in control of the processes to avoid possible damage; pay attention to indicators because they point to relevant evidence; and constantly check facts to avoid false evidence.
Figure 1 illustrates in detail how the development evaluation described above is executed in actual evaluation settings.
FIGURE 1: Stages of co-generation, co-learning and co-validation of evidence and narratives about an ongoing or completed project or programme.
Figure 1 depicts development evaluation as a multistaged process of co-generation, co-learning and co-validation of evidence and narratives about the development project or programme. The stages are described further in the text, highlighting the roles of the external evaluation expert(s) and the participants.
The first stage involves initiating the evaluation and engaging an external evaluation expert. As evaluation is a social activity initiated by and involving people to realise specific goals, any project or programme stakeholder (especially managers, implementers, funders, beneficiaries, and community or government leaders) has the right to initiate a mid-term or end-line evaluation of the project or programme by notifying fellow stakeholders. After that, the initiating stakeholder identifies and engages experienced external evaluation expert(s) to facilitate the inquiries about, valuing and judging of some or all aspects of the intervention. The initiating stakeholder and the external evaluation expert(s) must agree on the evaluation purpose(s). The commissioned external evaluation expert serves as the lead and facilitator of the evaluation process. In line with the agreed evaluation purpose, the external evaluation expert must facilitate: (1) the generation and assessment of evidence on performance in keeping promises made under the project or programme inputs, process and products; (2) the generation of evidence on possible and actual failures in implementing and managing the project or programme, and support the devising of preventive and corrective measures; or (3) the generation of evidence on aspects of the project or programme that enables participants to co-learn and co-produce a credible history of the completed project or programme.
The second stage involves mapping out and selecting legitimate stakeholders. The legitimate stakeholders include project or programme managers, implementers, funders, beneficiaries, and community or government leaders. These stakeholder groups are presented as Stakeholder Groups A, B, C and D in Figure 1. The external evaluation expert selects representatives of each stakeholder group based on their knowledge of and experience with the intervention and invites them to participate in dialogues in their respective homogenous stakeholder groups. Three homogenous groups can be established for each stakeholder group. These are presented as Homogenous Stakeholder Group A 1–3, Homogenous Stakeholder Group B 1–3, Homogenous Stakeholder Group C 1–3 and Homogenous Stakeholder Group D 1–3 in Figure 1. Each homogenous group has a maximum of seven participants.
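As a schematic companion to this stage-two mapping, the sketch below (Python) groups selected representatives into homogenous subgroups of at most seven members, with up to three subgroups per stakeholder group, mirroring the limits stated above and the labels in Figure 1. The function name, the error handling and the example participants are assumptions introduced for illustration only, not a prescribed tool.

```python
MAX_PARTICIPANTS = 7   # maximum membership of a homogenous group (from the text)
MAX_SUBGROUPS = 3      # up to three homogenous groups per stakeholder group

def build_homogenous_groups(stakeholders):
    """stakeholders maps a stakeholder group label ('A'-'D' in Figure 1) to the
    representatives selected for their knowledge of and experience with the
    intervention. Returns labelled homogenous subgroups of at most
    MAX_PARTICIPANTS members each."""
    groups = {}
    for label, members in stakeholders.items():
        for start in range(0, len(members), MAX_PARTICIPANTS):
            number = start // MAX_PARTICIPANTS + 1
            if number > MAX_SUBGROUPS:
                raise ValueError(
                    f"Stakeholder Group {label} exceeds {MAX_SUBGROUPS} homogenous "
                    "groups; revisit the selection of representatives.")
            groups[f"Homogenous Stakeholder Group {label} {number}"] = \
                members[start:start + MAX_PARTICIPANTS]
    return groups

# Hypothetical selection: 10 beneficiaries and 2 implementers.
example = {
    "A": [f"Beneficiary {n}" for n in range(1, 11)],
    "B": ["Implementer 1", "Implementer 2"],
}
for name, members in build_homogenous_groups(example).items():
    print(name, len(members), "participants")
```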
The third stage involves conducting inquiries about, valuing and judging aspects of the project or programme based on the perspectives of individual participants and using their preferred methods and tools. The external evaluation expert facilitates each participant to share their perspectives and evidence on aspects of the project or programme they have experienced. The expert evaluator must interrogate participants’ experiences with specific aspects of the project or programme and engage with their ways of knowing, valuing, measuring and validating evidence to better understand their evidence and narratives about aspects of the project or programme. She or he must not impose assessment criteria or standards but must allow participants to use their preferred criteria and standards to judge aspects of the project or programme. Each participant presents evidence on promise-keeping, preventive or corrective measures, and versions of events for others to quash, improve or validate. This interaction enables participants to learn from each other and reach a consensus on the most plausible evidence, the most appropriate preventive or corrective measures, and the most dependable versions of the events, considering their shared experiences as members of the same stakeholder group.
The fourth stage involves sharing and debating evidence on promise-keeping, preventive or corrective measures, and versions of events established in the homogenous groups. Figure 1 indicates the establishment of Intra-Homogenous Groups A, B, C and D. The expert evaluator invites representatives of the homogenous groups to join their respective intra-homogenous groups to present and debate the evidence of promise-keeping, the recommended preventive or corrective measures, and the versions of events related to the project or programme co-generated and co-validated in their respective homogenous groups. The expert evaluator facilitates members of these intra-homogenous groups in critiquing, improving and validating the presented evidence, measures and narratives. In doing so, they learn from each other and agree on the most plausible evidence, the most effective preventive or corrective measures, and the most dependable versions of the events for them as members of one stakeholder group. Each intra-homogenous group generates plausible evidence on various aspects, preferred preventive or corrective measures, and versions of events related to the project or programme to be presented and debated in the heterogeneous groups.
The fifth stage covers the sharing and debating of evidence on promise-keeping, preventive or corrective measures, and versions of events in the heterogeneous groups. Members of the heterogeneous groups must have participated in the intra-homogenous groups and must bring the evidence, preventive or corrective measures, and versions of events established there. Figure 1 indicates three Heterogeneous Groups, 1, 2 and 3, whose members come from Intra-Homogenous Groups A, B, C and D. Members of the Heterogeneous Groups might quash, improve or validate the generated evidence, the assessment criteria and standards used, the appropriateness of the suggested measures to prevent or correct failures, and the shared narratives about events related to the development project or programme. In doing this, members learn about and appreciate the perspectives of fellow stakeholders and build on their credible evidence and narratives to make evidence-based decisions on various aspects of the development project or programme. Deliberations on various aspects reached in the Heterogeneous Groups are presented and debated in the Final Stakeholders Group Workshop.
The sixth stage involves dialogues and critical reflections in the Final Stakeholders Group Workshop. Representatives of all stakeholders meet to debate the generated evidence, the recommended preventive or corrective measures, and the versions of events of completed projects or programmes. The evaluation expert facilitates participants to critically evaluate the quality of the presented evidence, the appropriateness of the recommended measures, and the credibility of the shared narratives. Then, the evaluation expert facilitates them to: (1) make evidence-based judgements on the fulfilment of project or programme promises, (2) approve recommended preventive and corrective measures and (3) approve the use of credible narratives in reporting successes, failures and lessons of the completed development projects or programmes.
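To summarise the dialogue sequence schematically, the sketch below (Python) lists the four dialogue stages through which evidence, measures and narratives pass and shows one hypothetical way of tracking whether an item was validated at every stage before reaching the final judgement and approval step. The stage names paraphrase the text, and the tracking function and example items are illustrative assumptions rather than part of the approach itself.

```python
# Dialogue stages three to six of the process depicted in Figure 1 (paraphrased).
DIALOGUE_STAGES = [
    "Homogenous stakeholder group dialogues",
    "Intra-homogenous group dialogues",
    "Heterogeneous group dialogues",
    "Final Stakeholders Group Workshop",
]

def carried_to_final_judgement(items):
    """items maps an evidence item, measure or narrative to the list of
    per-stage decisions (True = validated, False = quashed) recorded as it
    moved through the dialogue stages. Only items validated at every stage
    they reached are carried into the final judgement, approval and reporting."""
    return [item for item, decisions in items.items() if all(decisions)]

# Hypothetical example: one item survives all four stages; the other is
# quashed in the heterogeneous group dialogues and goes no further.
example = {
    "Evidence on input-delivery promises": [True, True, True, True],
    "Narrative about a contested milestone": [True, True, False],
}
print(carried_to_final_judgement(example))
```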
With the proper management of the above-described evaluation process and diligent facilitation of participants in the homogenous, intra-homogenous and heterogeneous stakeholder dialogue groups and the Final Stakeholders Group Workshop, co-generation, co-learning and co-validation of evidence and narratives about the development projects and programmes are guaranteed.
Conclusion
Some Swahili proverbs embody philosophical insights that inspire and guide the framing and practice of development evaluation as the people’s activity comprising: (1) inquiries to generate evidence to support evidence-based judgements on performance and the keeping of promises made in development projects and programmes, (2) inquiries to generate evidence to support the devising of measures to prevent failures and correct mistakes in the implementation of the intervention and (3) inquiries to generate evidence to support collaborative learning and co-production of the histories of completed development projects and programmes.
Based on the established philosophical insights, the evaluand is a single but multifaceted phenomenon (ontological belief); knowledge about the evaluand is possible through close and trusted relationships with people who have experienced it (epistemological belief); adequate learning about and production of the credible history of the evaluand are possible in well-established humane relations (axiological belief); and the credible information and evidence on aspects of the evaluand are generated in the well-managed processes of inquiry and assessment (methodological belief). These philosophical beliefs constitute the main content of the Swahili Evaluation Approach, making it philosophically grounded and very able to inspire and guide people-driven inquiries about, valuing and judging aspects of development projects and programmes.
Furthermore, the theoretical and methodological guidelines drawn from the generated philosophical insights adequately and powerfully enlighten development evaluators on: (1) selecting and engaging the legitimate project or programme stakeholders, (2) proper management of evaluation processes, (3) utilising diverse forms of knowing and assessment criteria of representatives of stakeholder groups and (4) facilitating co-generation, co-learning and co-validation of findings and forms of reporting successes, failures and lessons learned.
In brief, the philosophy of evaluation based on wisdom in Swahili proverbs and underpinning the Swahili evaluation approach provides adequate guidance on the what, why and how of conducting development evaluations (Donaldson & Lipsey 2006; Mertens & Wilson 2019; Smith 2008).
Acknowledgements
The author acknowledges the insightful comments and inputs from AfrEA Board Members, TanEA Secretariat, Dr Rose Mbijima, Dr Selestino Msigala, and 61 Swahili-speaking evaluators. This article builds on some content and interpretations generated from the 61 Swahili-speaking evaluators (Mazigo et al. 2024). The author also acknowledges insightful comments and recommendations for improving the manuscript received from Robin Miller, Anne Coghlan, the anonymous reviewers, and the journal editors.
Competing interests
The author declares that he has no financial or personal relationships that may have inappropriately influenced him in writing this article.
Author’s contribution
A.F.M. is the sole author of this research article.
Funding information
Research and writing of this article were made possible by support from The African Evaluation Association (AfrEA) through the AfrEA/USDS Made in Africa Evaluation and the P2P Exercises Project, and The Africa Gender and Development Evaluators Network (AGDEN) through the EvalIndigenous Africa Project Activity.
Data availability
The author confirms that the data supporting the findings of this study are available within the article.
Disclaimer
The views and opinions expressed in this article are those of the author and are the product of professional research. They do not necessarily reflect the official policy or position of any affiliated institution, funder, agency, or the publisher. The author is responsible for this article’s results, findings and content.
References
Abrahams, M.A., 2015, ‘A review of the growth of monitoring and evaluation in South Africa: Monitoring and evaluation as a profession, an industry and a governance tool’, African Evaluation Journal 3(1), 142. https://doi.org/10.4102/aej.v3i1.142
African Evaluation Association (AfrEA), 2007, Making evaluation our own: Strengthening the foundations for Africa Rooted and Africa Led M&E: Special stream statement, viewed 25 August 2023, from https://vopetoolkit.ioce.net/sites/default/files/resources/3.8e3%20PUB_Special%20Stream%20Statement.pdf.
Basheka, B.C. & Byamugisha, A.K., 2015, ‘The state of monitoring and evaluation (M&E) as a discipline in Africa: From infancy to adulthood?’, African Journal of Public Affairs 8(3), 75–95.
Carroll, K.K., 2008, ‘Africana studies and research methodology: Revisiting the centrality of the Afrikan worldview’, Journal of Pan African Studies 2(2), 5–27.
Chilisa, B. & Malunga, C., 2012, ‘Made in Africa evaluation: Uncovering African roots in evaluation theory and practice’, in paper presented at African thought leaders forum on evaluation and development: Expanding thought leadership in Africa, Bellagio, 14–17 November 2012.
Chilisa, B., 2015, A synthesis paper on the made in Africa evaluation concept. Commissioned by the African Evaluation Association (AfrEA), viewed 25 August 2023, from https://afrea.org/wp-content/uploads/2018/02/MAE-Chilisa-paper-2015-docx.pdf.
Chilisa, B. & Mertens, D.M., 2021, ‘Indigenous Made in Africa evaluation frameworks: Addressing epistemic violence and contributing to social transformation’, American Journal of Evaluation 42(2), 241–253. https://doi.org/10.1177/1098214020948601
Chilisa, B., Major, T.E., Gaotlhobogwe, M. & Mokgolodi, H., 2016, ‘Decolonizing and indigenizing evaluation practice in Africa: Toward African relational evaluation approaches’, Canadian Journal of Program Evaluation 30(3), 313–328. https://doi.org/10.3138/cjpe.30.3.05
Chirau, T., Mapitsa, C.B., Amisi, M., Masilela, B. & Dlakavu, A., 2020, ‘A stakeholder view of the development of national evaluation systems in Africa’, African Evaluation Journal 8(1), 1–9. https://doi.org/10.4102/aej.v8i1.504
Donaldson, S.I. & Lipsey, M.W., 2006, ‘Roles for theory in contemporary evaluation practice’, in I. Shaw, J. Greene & M. Mark (eds.), Handbook of evaluation, pp. 56–75, SAGE, London.
Easton, P.B., 2012, ‘Identifying the evaluative impulse in local culture: Insights from West African proverbs’, American Journal of Evaluation 33(4), 515–531. https://doi.org/10.1177/1098214012447581
Gaotlhobogwe, M., Major, T.E., Koloi-Keaikitse, S. & Chilisa, B., 2018, ‘Conceptualizing evaluation in African contexts’, New Directions for Evaluation 159, 47–62. https://doi.org/10.1002/ev.20332
Global Evaluation Initiative (GEI), 2024, Global directory of academic programs in evaluation, viewed 15 March 2024, from https://www.betterevaluation.org/global-directory-of-academic-programs-in-evaluation.
Jeng, A., 2012, ‘Rebirth, restoration, and reclamation: The potential for Africa-centred evaluation and development models’, in paper presented at African thought leaders forum on evaluation and development: Expanding thought leadership in Africa, Bellagio, 14–17 November 2012.
Mazigo, A.F., 2015, Towards an alternative development ethic for the fishing sector of Ukerewe District, Tanzania, Doctoral dissertation, Stellenbosch University, viewed 25 August 2021, from http://hdl.handle.net/10019.1/96739.
Mazigo, A.F., 2021, ‘African humanism and eradicating extreme poverty worldwide’, Utafiti 16(1), 124–140.
Mazigo, A.F., Mwaijande, F., Nguluki, I.M. & Mkombozi, M., 2024, ‘Swahili wisdom for shaping development evaluation practices’, African Evaluation Journal 12(2), a738.
Mbava, N.P. & Chapman, S., 2020, ‘Adapting realist evaluation for made in Africa evaluation criteria,’ African Evaluation Journal 8(1), a508. https://doi.org/10.4102/aej.v8i1.508
Mertens, D.M. & Wilson, A.T., 2019, Program evaluation theory and practice: A comprehensive guide, Guilford Publications, New York, NY.
Methali, n.d., Methali 1500 za Kiswahili na Maana Zake, viewed 15 November 2020, from https://www.msomibora.com/2021/03/methali-1500-na-maana-zake.html.
Methali za Kiswahili, n.d., viewed 15 November 2020, from https://www.mwambao.com/methali.htm.
Morra-Imas, L.G. & Rist, R.C., 2009, The road to results: Designing and conducting effective development evaluations, World Bank Publications, Washington, DC.
Muwanga-Zake, J.W.F., 2009, ‘Building bridges across knowledge systems: Ubuntu and participative knowledge paradigms in Bantu communities’, Studies in the Cultural Politics of Education 30(4), 413–426. https://doi.org/10.1080/01596300903237198
Norris, N., 2015, ‘Democratic evaluation: The work and ideas of Barry MacDonald’, Evaluation 21(2), 135–142. https://doi.org/10.1177/1356389015577510
Organisation for Economic Co-operation and Development (OECD), 1991, Principles for evaluation of development assistance, viewed 25 August 2023, from https://www.oecd.org/development/evaluation/2755284.pdf.
Porter, S. & Goldman, I., 2013, ‘A growing demand for monitoring and evaluation in Africa’, African Evaluation Journal 1(1), 10–18. https://doi.org/10.4102/aej.v1i1.25
Schwandt, T.A. & Gates, E.F., 2021, Evaluating and valuing in social research, Guilford Publications, New York, NY.
Smith, N.L., 2008, ‘Fundamental issues in evaluation’, in N.L. Smith & P.R. Brandon (eds.), Fundamental issues in evaluation, pp. 159–166, Guilford Press, New York, NY.
Swahili Proverbs, n.d., viewed 15 November 2020, from http://swahiliproverbs.afrst.illinois.edu/introduction%20to%20listing.html.
United Nations Development Programme (UNDP), 2009, The handbook on planning, monitoring and evaluating for development results, United Nations Development Programme, New York, NY, viewed 23 February 2024, from http://web.undp.org/evaluation/guidance.shtml#handbook.