Abstract
Background: Most evaluation in Africa is rooted in dominant neoliberal Western approaches. Imported Western evaluation frames may lack multicultural validity and can lead to wrong conclusions and poor development outcomes. They may also reinforce subjugation and cultural hegemony through neo-imperialism and colonisation of the imaginations of those concerned. The Made in Africa Evaluation (MAE) concept has received attention in recent years as a way to address this challenge. As a relatively nascent construct, however, interested scholars and professionals continue to seek to define and operationalise MAE more effectively.
Objective: The objective of this study is to provide a working definition of MAE.
Methods: We used the Delphi technique to solicit informed views from expert evaluators working in Africa. We interviewed two additional experts to triangulate and test the validity of those findings. We also tested the Delphi-derived definition of MAE through the analysis of six illustrative evaluation reports. Finally, we asked the same panel of experts to complete a survey aimed at clarifying the key next steps to advance the construct.
Results: Our efforts to elucidate a concise definition of MAE yielded the following: Evaluation that is conducted based on African Evaluation Association (AfrEA) standards, using localised methods or approaches with the aim of aligning all evaluations to the lifestyles and needs of affected African peoples whilst also promoting African values.
Conclusion: We posit that this working definition, however tentative, has the potential to influence the practice, study, and teaching of evaluation in Africa.
Keywords: Made in Africa Evaluation; indigenous evaluation; culturally responsive evaluation; African evaluation; culture-centered evaluation.
Introduction
The field of evaluation in Africa is at a critical juncture as it faces new scrutiny and questions about its responsiveness to context and its sensitivity to the needs and realities of the continent’s populations (Chilisa & Mertens 2021:241–253). Program evaluation is defined by Fournier (2005) as:
[A]n applied inquiry process for collecting and synthesizing evidence that culminates in conclusions about the state of affairs, value, merit, worth, significance, or quality of a program, product, person, policy, proposal, or plan. (pp. 139–140)
Program evaluation is playing an increasingly important role in the international development landscape (Mertens & Wilson 2012). Governments, non-governmental organisations (NGOs), and bilateral institutions have increasingly encouraged the evaluation of their programmes, policies, and interventions. However, recent analyses of both the political economy of evaluation in Africa and its attendant methodologies and approaches (Chilisa 2012, 2015) have highlighted power differentials that influence practice and have raised questions including:
- Who sets the agenda for what should be evaluated, and how?
- Which evaluation firms and evaluation consultants are hired?
- Which evaluation questions and methodologies are used? and
- Whose knowledge counts?
Chilisa (2015) and members of the African Evaluation Association (AfrEA 2007) have employed these questions to prompt the development of a Made in Africa Evaluation (MAE) framework. However, this concept has been used by key actors in conferences, the literature, and other venues without agreement on its meaning. For example, at AfrEA’s 2013 conference in Yaoundé, Cameroon, thought leaders espoused many differing views of MAE without reaching a common understanding of the concept. In her landmark 2015 synthesis paper, Chilisa explored the concept’s history, meaning, and application by examining the consensus (and dissensus) amongst some expert evaluators in the field. That paper, commissioned by AfrEA with support from the Bill and Melinda Gates Foundation, picked up the thread from Chilisa and Malunga’s (2012) Bellagio conference paper on the same topic. She discussed, for example, the centrality of relational epistemology, methodology, and axiology in MAE, as well as the importance of context.
Chilisa (2015) moved the field towards conceptualising MAE, using a Strengths, Weaknesses, Opportunities, and Threats (SWOT) analysis as her methodology, in part to hedge against the proliferation of divergent conceptualisations of the idea. Her synthesis paper yielded notable results, one of which was the identification of potential ways forward for the MAE concept in Africa. Further, Chilisa’s (2015) work posited that MAE should challenge the current practice of designing evaluation tools that pay no attention to contexts in Africa, and should recognise and promote African diversity as it manifests through different cultures, customs, languages, histories, and religions. MAE must challenge the current evaluation practice that leaves stakeholders wondering how exactly the community is benefitting from evaluation. It must challenge evaluation that shows great successes of an intervention whilst the reality on the ground is entirely different. It must also question the marginalisation of African data collection tools such as storytelling, folklore, talking circles, music, dance, and oral traditions.
She further asserted that MAE must be a tool for development. It should address the gap between the way we think development works and the way evaluation is done. To address this, evaluators should be more open to African peoples’ beliefs and values about what constitutes development in Africa (Chilisa 2015). Another view from Chilisa’s findings is that MAE draws knowledge contributions from African history, Political Science, Anthropology, Sociology, African Philosophy, African Oral Literature, and African Knowledge Systems, making it a transdisciplinary concept. Her study also established that most evaluation experts interviewed agreed that worldviews and paradigms about the nature of reality, knowledge, and values of the African people should constitute MAE methodology.
She also signalled some discord and unresolved questions regarding the contours of the concept, due in part ‘to whether scholars can originate evaluation practices and theories rooted in African world-views and paradigms and indeed if African paradigms exist’ (Chilisa 2015:27). As such, her effort stopped short of offering a concise definition of MAE. This study sought explicitly to build on Chilisa’s foundational work to contribute to MAE’s refinement and to ascertain the extent to which it is gaining acceptance and prominence amongst those engaged in evaluation efforts across Africa. Theoretically, this study was informed by a postcolonial critique of the development project and neoliberalism (Fanon 1965; Harvey 2007; Tiffin 1995). Our analysis also drew on decolonising and indigenous methodologies as used in research and evaluation (Chilisa 2012; Cloete 2016).
To further argue for the need for MAE, Uwizeyimana (2020) promoted the need for Africa-rooted evaluation and recognised the African ubuntu philosophy as its bedrock. The ubuntu philosophy is the heartfelt connection and interconnectedness amongst African people; it expresses how Africans own and do things collectively rather than individually. This philosophy is known by different names in different African countries and cultures. For example, it is known as botho to the Sotho and Tswana people of Lesotho, Botswana, and South Africa, whilst the Yoruba people of Nigeria express it through the ajobi and ajose philosophies (Omosa 2016). In arguing for mainstreaming of the ubuntu philosophy, Uwizeyimana (2020) opined that even though the philosophy is integral to the concept of MAE, it has weaknesses and serious consequences for African evaluation, which must be addressed if it is to serve effectively as the bedrock of Africa-rooted evaluation.
We postulate that MAE represents an alternative to the Western-centric epistemologies and ontologies that characterise the neoliberal ‘development project’ (McMichael & Weber 2020). Many critiques of neoliberalism in development have examined its failure in Africa through the lens of postcolonialism (Lundgren & Peacock 2010). Postcolonial indigenous theory and decolonising and indigenous methodologies present a worldview that deconstructs neoliberal ‘truths’ and norms that have heretofore been presented as normal and natural, showing them instead to be colonising and inequitable (De Sousa Santos 2018; Tamale 2020). Informed by this framing, this study addressed the following research questions:
How do thought leaders in the African evaluation field define MAE?
How are MAE principles operationalised and presented in evaluation reports?
What next steps do African evaluation thought leaders believe are necessary to advance the MAE concept?
In the remainder of this article, we present the methods, results, and implications of our empirical research aimed at addressing these research questions. Whilst the field of evaluation has benefited from an increase in conceptual literature on MAE in recent years (Chilisa & Mertens 2021), we posit that the social and scientific value of this article derives from its empirical approach, designed to help the field move towards a clearer conceptualisation and definition of MAE, thereby positioning the concept for further uptake, use, and study.
Methods
Study design
This multiple methods study employed a Delphi technique, interviews, prioritisation of actionable items by the same panel of experts used for the Delphi technique, and document analysis of evaluation reports. The Delphi approach involved two rounds of online surveys and analysis of participants’ statements (Hsu & Sanford 2007). In addition to the Delphi process, our Delphi participants completed an online questionnaire to garner additional data, such as their perceptions of the next steps needed to develop the MAE concept, that extended beyond the data collection format used in the Delphi portion of the study. Then, we conducted in-depth interviews with two additional evaluation experts to triangulate and strengthen our Delphi-related results. Finally, we reviewed evaluation guidance documents and reports to seek evidence of whether and how aspects of MAE were operationalised therein. Figure 1 is a schematic diagram of the various methods used in this study and how they relate to each other.
FIGURE 1: Methodological schematic showing methods used in this study and the relationship between them.
Multiple methods generally strengthen research designs because specific strategies have both strengths and weaknesses (Brewer & Hunter 2006). Mertens (2008) argued that using multiple methods helps in developing credible and accurate measurements and can increase validity. It achieves this by triangulating sources and capitalising on the strengths of each method employed (Creamer 2017).
The Delphi technique is an iterative survey method, developed by the RAND Corporation to systematically solicit informed opinions from participants within their domain of expertise and knowledge base (Helmer-Hirschberg 1967; Hsu & Sanford 2007). More specifically, the Delphi technique ‘is a widely used and accepted method for achieving convergence of opinions concerning real-world knowledge solicited from experts within certain topic areas’ (Hsu & Sanford 2007:1). To implement the Delphi method, multiple rounds of questions based on a list of statements about the topic at hand are sent to an expert panel, who rate and add to the statements. The researchers then incorporate and synthesise the first round of expert panel responses to yield new statements and initial analyses, which are then returned to the experts for further input.
We selected a Delphi analysis to address our first research question for several reasons. Firstly, the iterations embedded in use of the Delphi technique made it possible to build consensus or dissensus (Hsu & Sanford 2007) amongst those we surveyed concerning the MAE concept. The method’s feedback process provided an opportunity for the experts involved in our Delphi process to reassess their initial judgements. Secondly, the approach is well-suited to gather detailed data from experts in a way that promotes their broad participation because expert respondents could be located anywhere geographically. Finally, the use of the Delphi technique as a temporally flexible approach allowed participants time for reflection concerning their responses and therefore helped to reduce pressure on them (Dalkey 1972; Hsu & Sanford 2007).
We purposively selected 17 prospective participants. We reached out to those individuals using publicly available email addresses, and seven of the 17 agreed to participate in the Delphi portion of this study. Two additional individuals agreed to an in-depth interview; their comments and insights added validity to our findings. For both the Delphi phase and the interviews, we selected from amongst a group of potential participants who met at least one of the following criteria: (1) top management bureaucrats who are evaluators or evaluation commissioners in African governments, multilateral intergovernmental organisations (e.g. the United Nations International Children’s Emergency Fund [UNICEF]), NGOs, and bilateral development entities in Africa; (2) African evaluation thought leaders, based on their work with AfrEA and previous championing of MAE; or (3) researchers who have conducted evaluation research and have written explicitly or indirectly about MAE in their publications. Additionally, we required that invited participants have at least 10 years’ experience in evaluation research or practice.
We used email to invite and subsequently communicate with respondents for this online study. A follow-up email was sent at the end of five business days to remind individuals who had not replied to their initial invitation. We sent that reminder twice, using a modified approach to survey reminders as proposed by Dillman, Smyth and Christian (2014). When we had recruited seven participants, enough to form our panel of experts, we sent an introduction letter, the first round of a web-based Qualtrics survey, and a consent form to all who had indicated interest. We sent a reminder email at the end of five business days urging participants to complete the survey. We gave participants 10 business days to complete that effort, after which we began to analyse the data provided by the surveys. We obtained participants’ email addresses through publicly available databases, such as those of Voluntary Organizations for Professional Evaluation (VOPEs), other published materials, and the AfrEA website.
Delphi method
The first round Delphi survey provided the expert panel a list of 10 statements describing MAE. To construct those statements, we identified prominent and common concepts that previous authors had employed in the salient literature to describe MAE. We list those statements, shown as S1 to S10, and their sources in Table 1. In this round, we asked participants to rate the relative importance of the derived MAE descriptors on a scale of one (least important) to six (highly important).
TABLE 1: Statements and their sources rated in Delphi questionnaire round one.
In addition to completing their importance rankings for each descriptive statement, we asked each respondent to provide up to five additional descriptors that, in their view, described MAE but were not captured in the original 10 depictions. These additional statements (indicated as B1, B2, etc.) were then included in the second round to be rated by the rest of the expert panel. The statements rated in the second round thus combined the statements on which there was dissensus with the additional statements suggested in round one. The second round followed the same procedure as round one.
Developing consensus criteria for both rounds
We defined respondent consensus as the extent to which individual scores demonstrated agreement concerning an item’s level of importance (Vo 2013). More specifically, we calculated the variance of ratings for each statement as well as the average variance amongst all descriptions evaluated. For this study, we defined consensus as having been attained when the variance for a statement was less than the average variance of all descriptors judged in that round. Conversely, we judged that disagreement remained amongst our respondents when an individual statement’s variance was greater than the average variance of all of the descriptions evaluated. Statements with very low variance or deviation from the mean suggested consensus. We constructed a two-by-two matrix to plot the relative mean scores and variance scores for all statements (see Figure 2). Statements found to have high consensus would then appear in quadrants I and II in Figure 2. We included statements on which disagreement remained in a second survey for re-rating by our expert respondents. Round two followed the same process of analysis to determine the level of importance attached to each remaining statement by our study participants.
FIGURE 2: Possible categories of statements with respect to averaged mean and variability.
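To make this procedure concrete, the following is a minimal Python sketch, under stated assumptions, of how statements could be classified into the quadrants of Figure 2. The ratings are hypothetical, and the importance cut-off (the grand mean of all statement means) is our illustrative assumption; the study itself specifies only the variance-based consensus rule.

```python
# Minimal sketch of the consensus rule described above; ratings are
# hypothetical. Only the variance-based consensus rule comes from the
# study; the importance cut-off (grand mean of all statement means) is
# an illustrative assumption.
import statistics

# Hypothetical panel ratings on the study's scale of
# 1 (least important) to 6 (highly important).
ratings = {
    "S1": [4, 5, 3, 4, 5, 4, 4],
    "S5": [6, 5, 6, 5, 6, 5, 6],
    "S8": [5, 6, 5, 5, 6, 5, 5],
}

means = {s: statistics.mean(r) for s, r in ratings.items()}
variances = {s: statistics.variance(r) for s, r in ratings.items()}

# Consensus threshold: the average variance across all statements in the round.
avg_variance = statistics.mean(variances.values())
# Assumed importance threshold: the grand mean of the statement means.
avg_mean = statistics.mean(means.values())

for s in ratings:
    consensus = variances[s] < avg_variance   # low variance: quadrants I and II
    important = means[s] >= avg_mean          # high mean: quadrants I and III
    quadrant = {(True, True): "I", (False, True): "II",
                (True, False): "III", (False, False): "IV"}[(important, consensus)]
    print(f"{s}: mean={means[s]:.2f}, variance={variances[s]:.2f}, quadrant {quadrant}")
```

Under this scheme, statements landing in quadrant I (high mean, low variance) feed the working definition, whilst high-variance statements (quadrants III and IV) would be returned to the panel for re-rating in round two.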
Developing a working definition of Made in Africa Evaluation
After analysing both rounds of surveys, we noted the final mean and variance values for each statement on which our respondents reached consensus. For example, the final mean value for S8 at the end of the second round was 5.20 (data and results are shared in full in the Results section, below). We also plotted the final mean value of each consensus statement against the final variance value in a scatter plot diagram. Quadrants I and III in Figure 2 contain statements with high mean values and hence, high importance in our respondents’ view. Meanwhile, quadrants I and II contain statements with low variance values, and therefore, consensus. More importantly, quadrant I offers statements with high mean and low variance values. In other words, panellists reached consensus and accorded these descriptors a high level of importance in both survey iterations.
We performed a content analysis on the quadrant I statements. Two coders jointly selected a central theme for each statement and employed those as codes for each (Corbin & Strauss 2015; Glaser & Strauss 1967). Taken together, those constructs constituted the elements used to elucidate a working definition of MAE, shared in the Results and Discussion section.
Interviews
To triangulate and augment the validity of the findings from the Delphi portion of our analysis, we interviewed two additional African evaluation experts. Because they were not available to participate in the full-fledged Delphi study, these participants instead agreed to individual interviews to share their perspectives on the MAE concept. We conducted a semi-structured online interview (via Skype video call) with each individual. Whilst a larger sample of interviewees would have added still further nuance to the study, we appreciate the triangulation and thick description provided by even these two in-depth interviews.
We transcribed the two audio recordings in Microsoft Word 2010. Following Corbin and Strauss (2015) and Glaser and Strauss (1967), we undertook whole text analysis of our transcripts. Because our participants expressed emotions, we paid attention to their tone of voice and to the phrases they emphasised. We read each transcript twice and identified text relevant to our research questions by highlighting it and noting it in the margin. Each segment of text comprised an excerpt, and excerpts consisted of one or more sentences or paragraphs; a full sentence was the smallest unit of analysis. In the data corpus, whenever two or more excerpts communicated the same information, we included only one in our analysis.
We scrutinised each excerpt and assigned one or more codes to capture its meaning. We compared and contrasted each code with the others we had assigned to identify distinctive properties. We then organised the resulting codes into a list and sought to develop categories that could encompass more than one code. Finally, we examined the contents of each category for coherence. We continued this process until we were satisfied that each of our categories was unique. The results of the interview process are presented in the Results and Discussion section, alongside the rest of the broader study’s results.
Document analysis
To address our second research question (How are MAE principles operationalised and presented in evaluation reports?), we asked our Delphi survey respondents to suggest links to reports they had written, or of which they were aware, that employed the MAE concept. However, when our participants did not suggest any reports, we purposively selected six evaluation reports from the databases of recognised evaluation funders and commissioners that potentially provided evidence of applying the MAE concept. We examined evaluation reports from the archives of the United States Agency for International Development (USAID) and UNICEF. We used concept mapping by selecting reports that incorporated culturally responsive evaluation (CRE) and MAE concepts, such as indigenous evaluation methods, participatory evaluation, localised knowledge, challenging Western views, and power relations. We then examined those reports in the light of our new working definition. For example, amongst the six evaluation reports, we selected a USAID report, ‘Evaluation of the TCE program in Mpumalanga and Limpopo’, and a UNICEF report, ‘Real-time Evaluation of UNICEF Somalia Country Office (SCO) Humanitarian Response to the Pre-Famine Crisis in Somalia’.
Using a document analysis (concept mapping) approach (eds. Canas et al. 2008), we pilot tested our newly developed MAE definition by applying it to the six selected evaluation reports. We expected that some reports might address more than one central element of our working definition. We included reports as evidencing salient elements of the definition if they contained concrete examples of those issues, and we highlighted specific sentences in reports that addressed elements of our draft MAE definition.
More specifically, using concept mapping, we read through each document at least two times, looking for evidence of the central themes of the MAE definition we had derived from our Delphi participants. We also took note of each report’s overall evaluation design/methods approach. For example, a central concept in the MAE definition is the promotion of African values. We placed any evaluation that suggested that it aimed to promote African values and that offered specific examples concerning how it had sought to do so, under the category ‘African Values’. The results of the document analysis process are presented in the Results and Discussion section, alongside the rest of the broader study’s results.
Actionable items prioritisation methodology
To address our third research question, we also asked our experts, in a separate survey, to evaluate the importance and feasibility of 12 actionable items to further develop and promote the MAE concept, as enumerated by Chilisa (2015). Because Chilisa presented these steps originally to chart a possible path forward for MAE, we used our empirical study as a way to build on and extend her 2015 work. Chilisa’s action steps are represented in Table 2 by statements W1 to W12. Note that this was not part of the Delphi technique, but rather a separate survey administered to the same sample of experts who participated in the Delphi portion of the study.
TABLE 2: Twelve actionable statements rated by this study’s Delphi participants.
We used Microsoft Excel to calculate mean scores for each statement on each of the two criteria (importance and feasibility) that we asked our respondents to employ. We created a slope graph, presented in the Results and Discussion section, depicting the relationship between the mean scores assigned for the level of importance and the level of feasibility of the 12 actionable items.
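As an illustrative aid, the following is a minimal Python sketch of such a slope graph, assuming the matplotlib library rather than Excel. Only statements W4 and W10 are shown, using the mean scores reported in the Results and Discussion section; the remaining ten statements would be added in the same way.

```python
# Minimal sketch of the importance-vs-feasibility slope graph (cf. Figure 4).
# Only W4 and W10 are shown, using mean scores reported in the Results
# section; the other statements would be added to the dictionary likewise.
import matplotlib.pyplot as plt

# statement: (mean importance, mean feasibility)
scores = {
    "W4": (4.43, 4.00),
    "W10": (4.29, 4.29),
}

fig, ax = plt.subplots(figsize=(4, 5))
for label, (importance, feasibility) in scores.items():
    # One line per statement, connecting its two mean scores.
    ax.plot([0, 1], [importance, feasibility], marker="o")
    ax.annotate(label, xy=(0, importance),
                xytext=(-30, 0), textcoords="offset points")

ax.set_xticks([0, 1])
ax.set_xticklabels(["Importance", "Feasibility"])
ax.set_ylabel("Mean rating")
ax.set_title("Relative importance and feasibility of actionable items")
plt.tight_layout()
plt.show()
```

In such a graph, a flat line indicates equal perceived importance and feasibility, whilst a downward slope indicates an item rated as more important than feasible.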
Results and discussion
Results from the Delphi process
Panellists rated the importance level of a total of 15 statements as part of the Delphi process, addressing a range of issues linked to the MAE concept. In the end, the panellists ranked four statements (S5, S7, S8, and B3) as most important, as shown in Table 3. As a reminder, the ‘S’ statements were from the original Delphi round’s prompt, whilst ‘B’ statements were generated by expert participants themselves and then included in round two. Because our first research question was to define the MAE concept more effectively, we derived the central theme of each of the four statements and used those descriptors as codes for each (Corbin & Strauss 2015; Glaser & Strauss 1967).
TABLE 3: Statements panellists rated as important at the end of the study and their statement codes.
Each code presented above captures the idea considered central to its statement. Taken together, these animating ideas, as represented in the central ideas/codes, form a working definition of MAE:
Evaluation that is conducted based on AfrEA standards, using localised methods or approaches with the aim of aligning all evaluations to the lifestyles and needs of affected African peoples whilst also promoting African values.
This new definition aligns with Cloete and Auriacombe (2019), who, in their discussion of decoloniality, portrayed Africa-rooted evaluation as an approach that promotes African practices and aligns with African identities and conditions. Furthermore, one aspect of our new definition of MAE is the use of localised African approaches with the aim of aligning all evaluations to the lifestyles of the African people. This aspect further strengthens the argument for the ubuntu concept as the main fabric of MAE, as elucidated by Chilisa and Malunga (2012), Cloete and Auriacombe (2019), and Uwizeyimana (2020). These authors presented ubuntu as epitomising the sense of community, collectiveness, and love amongst the African people, which is consistent with our new definition’s premium on aligning evaluation to the lifestyles and needs of African people whilst promoting African values.
Results from the interview process
Seven central themes emerged in our interviews with the two evaluation experts: (1) The importance of guidelines when conducting MAEs; (2) The importance of research concerning MAE; (3) Cultural competency and MAE; (4) Further research on localised methods; (5) The integration of international practice and AfrEA standards in MAE; (6) The relevance of CRE in MAE; and (7) The way forward for the MAE concept. For example, when asked whether they were satisfied with the consensus definition of MAE we developed on the basis of our Delphi respondents’ views, one interviewee emphasised the need to probe further into what constitutes ‘localised knowledge’. In the interviewee’s view, such knowledge must be incorporated in ways beyond citing and using examples of localised methods:
‘If we look at methods, as you said, but [we should also ask] what is underneath the method. Why is storytelling in Africa more important than elsewhere, why do people believe in sharing [in Africa] more than, you know, different ways [of thinking about that value] used elsewhere? Why are African courts different? The judgments, what is underneath? How do Africans see judgments, and so on? We need to ask deeper questions, and so that is why I think it is important that we do engage philosophy in values.’ (Interviewee 1)
In short, this interviewee contended that there is a need to understand further the philosophy that undergirds localised knowledge to deepen awareness of its implications for MAE.
Furthermore, it is important to highlight that these seven themes align with the themes the Delphi panellists rated as important, as discussed above. For example, one of the central themes from the interviewees is the integration of AfrEA standards in MAE. This theme aligns with conducting evaluation studies that are consistent with evaluation standards developed and used by AfrEA. Also, the central theme focused on the relevance of CRE in MAE aligns with adapting evaluation work to the lifestyles and needs of the African communities where evaluation is conducted. This alignment shows a convergence between the central themes from the interviewees and the statements rated as important by the Delphi panel.
Results from the document analysis
The second research question of this study was to explore how MAE principles are operationalised and presented in evaluation reports. To illustrate the presence and distribution of each theme in each report, we used a concept map, which appears as Figure 3. The figure suggests that the six evaluation reports align with AfrEA’s evaluation standards and guidelines by showing evidence of being realistic, prudent, serving the needs of the intended stakeholders, and conducted legally and ethically. These reports also align with the needs of the African people by showing evidence of meeting those needs. Furthermore, African values were evident and promoted in reports 2, 3, 5, and 6, whilst evaluators employed localised methods in reports 2, 3, and 6. For example, report 2 showed evidence of employing localised methods by using focus groups comprising traditional leaders/Indunas, traditional healers, youths, and others as part of the methodology. Additionally, report 2 showed evidence of promoting African values by targeting traditional leaders/Indunas and healers as part of the methodology for the evaluation.
FIGURE 3: A concept map showing the presence and distribution of each theme in each report.
Results from the actionable items prioritisation process
For the third research question, the panellists considered Chilisa’s (2015) 12 actionable items (represented by W1–W12), which she presented as signposts for refinement of the MAE concept. However, only statements W4 (fund research on MAE and evaluation that may be used as a test case for MAE) and W10 (review AfrEA guidelines in the light of the MAE approach) stood out for our respondents. These two statements had high mean scores for both their levels of importance and feasibility. These findings appear in Table 4 and Figure 4. Statement W4 has mean scores of 4.43 and 4.00 for the level of importance and of feasibility, respectively, whilst statement W10 has the same mean score of 4.29 for both its perceived level of importance and feasibility.
TABLE 4: Twelve statements and the summary statistics for Chilisa’s way forward for Made in Africa Evaluation.
FIGURE 4: Slope graph showing relative average importance and feasibility ratings of statements.
Figure 4 is a slope graph that depicts the difference between the mean scores our respondents assigned for importance and for feasibility for each of Chilisa’s 12 action items. In this slope graph, the mean scores for the level of importance and the level of feasibility of statement W10 are the same, at 4.29 for both variables. This is represented by the straight orange horizontal line, and this high mean score shows that the panellists consider the statement both important and feasible. For statement W4, there is a difference of 0.43 between the mean scores for the level of importance and the level of feasibility. These high mean scores, 4.43 and 4.00 for the level of importance and feasibility, respectively, show that the panellists also see the statement as important and feasible. This is represented by the yellow line (sloping down from left to right).
Implications
The first result of this study, in response to Research Question 1, is the newly elucidated concise definition of MAE, which in turn lends itself to a number of other implications for evaluator training and capacity building, evaluation practice, evaluation policy, and research on evaluation. We address each of these implications next.
Evaluator training
As our new definition of MAE, introduced earlier, shows, the recognition of AfrEA and other relevant Voluntary Organizations for Professional Evaluation (VOPE) guidelines, the use of localised knowledge and approaches, increased consideration of the lifestyles of populations of interest, and the promotion of African values are central to the concept of MAE. Previous efforts have sought to expand the field by teaching evaluation competencies to ensure that would-be evaluators possess the necessary technical skill-sets (Thomas & Madison 2010). However, beyond acquiring such competencies, our findings illuminate the need for African evaluators to become deeply aware of African philosophies and values as revealed across the continent. These thoughts have also been alluded to elsewhere. Cram (2018) opined that, as a way of becoming responsive and adapting their practice to African cultures and values, evaluators must seek to acquire adequate knowledge and become more aware of African values. Cram (2018) also encouraged partnership between evaluators and tribal members, arguing that evaluators should seek advice and feedback from tribal members in the evaluation process, which will further deepen their knowledge of African cultures and values.
Another example is the philosophy of ubuntu introduced earlier (‘I am because we are’). This philosophy is woven through the fabric of many African cultures and communities; in such communities, no single person can claim to speak for the entire community (Cloete & Auriacombe 2019; Uwizeyimana 2020). A similar philosophy is ingrained in West African cultures. For example, the Yoruba people of Nigeria and other West African countries embrace ajose and ajobi, a view that prizes collectivism over individualism (Omosa 2016). Whilst there are many more examples, paying attention to the promotion of African values and the use of localised knowledge, as illuminated by our findings, appears vital in the training of young and emerging evaluators (YEEs) in Africa if those evaluators are to realise the aims of MAE.
Evaluation practitioners in Africa should be trained to prioritise the promotion of African values and increased consideration for the lifestyles of the African people in their understanding of the theory and practice of evaluation. If this type of training is encouraged, it will reduce the influence of the Eurocentric models of evaluation which continue to deny the important place of Africa’s rich history, context, and philosophy in evaluation.
Evaluation practice
Our finding from the third research question, that AfrEA guidelines should be reviewed in the light of the MAE approach, corroborates the need expressed elsewhere to revisit current AfrEA guidelines as definitions of MAE evolve. Doing so can potentially enhance MAE and, ultimately, yield better evaluation practice in Africa (Chilisa 2015). For continuous growth and development in any field, there is a need to revisit the foundations and guidelines that constitute the field of practice and to improve on them continually. The governing board of AfrEA might consider reframing AfrEA guidelines to align them with current thinking on MAE.
Research on evaluation
As with every promising nascent concept, MAE will continue to be enriched. It will continually be shaped and framed by different perspectives and thinking so that we can start seeing changes in practice. One key finding from this study is the need for further research to operationalise localised methods and approaches. For example, what are specific examples of localised methods or approaches? What are the implications of methods involving storytelling, local courts, campfires, and proverbs? Also, in what ways can these approaches be actively represented and recognised in evaluation reports? Chilisa (2012, 2015) and Cloete (2016) have contributed much to the exploration of these terms and approaches. However, the need remains for further research along these lines.
Study limitations
The Delphi methodology conveys important advantages, but it also has limitations. Questions are often raised about the accepted sample size for a good Delphi study. Also, because the Delphi methodology is iterative and sequential, with a layered feedback process integral to its use, some uncertainty can arise when the sample size drops during the study because of participant attrition. Notably, in this study, because of personal and other issues beyond their control, two panellists had to be excused during the second round of the survey, reducing the number of panellists from seven to five.
However, it has been empirically established that sample size has minimal impact on the quality of data in a Delphi study; what is most important is the level of training and knowledge of panellists about the subject matter. In particular, Akins, Tolson and Cole (2005) established that response characteristics are stable for a small expert panel, that is, response characteristics remain stable irrespective of the sample size. One final methodological quandary related to our use of the Delphi method is that it is not itself a Made in Africa approach; it rests on Western epistemological, ontological, and methodological assumptions. Yet, whilst some may find it ironic to study African methodologies using a non-African method, we maintain that the tool was appropriate for this type of study, that is, helping to arrive at expert-based consensus. Further support for this view is evident in the argument of Cloete and Auriacombe (2019) in their critique of coloniality. They argued that to reject Eurocentric research and evaluation approaches outright is rigid and wholly misplaced. Instead, African evaluators must acknowledge the importance and validity of Western research and evaluation approaches whilst using them as a supplement to indigenous evaluation approaches.
In addition to the established findings discussed above, this study included interviews with two other stakeholders who champion MAE. These interviewees were initially scheduled to be part of the Delphi panel but opted out because of their busy schedules. The interviews provided additional perspective on the findings from the Delphi process: the participants not only offered their own understandings and definitions of the concept, but also offered a critical viewpoint on the consensus definition developed from the Delphi.
Additionally, the six reports sampled to address the second research question are not a comprehensive reflection of all evaluations on the continent. As such, claims about the mainstreaming of the MAE concept in Africa may not be robust. However, the sample is sufficient to address the question because the main thrust of the question was to test-run the developed consensus definition of MAE and explore some illustrative ways in which evaluation in Africa aligns with the principles of MAE.
Conclusion
This article’s primary contribution to the field is a working, albeit tentative, definition of MAE, which other practitioners and scholars are invited to further test and apply. We posit that the definition shared in this manuscript is a significant accomplishment in evaluation theory in Africa, which will, in turn, influence practice on the continent. Beyond arriving at a definition of MAE, which is a critical step in evaluation theory and practice in Africa, the evidence presented above points to the need for the concept of MAE to be mainstreamed by making sure that it gains acceptability, prominence, and wider use amongst African evaluators. This can be one step in generating new possibilities for praxis in the face of the dominant power-knowledge assemblages that characterise postcolonial contexts.
It is important to note that this study took a step towards this goal by investigating how the concept is presented and operationalised in evaluation reports. Additionally, the panel of experts prioritised the next steps for the concept in Africa, which also moves the concept towards mainstreaming. However, even though these are important considerations for mainstreaming the concept in evaluation practice, there is a need for further research to ingrain the concept and ensure that it gains wider coverage, acceptability, prominence, and use on the African continent. Lastly, as with every emerging concept, it is expected that the findings from this investigation will contribute to improving evaluation theory and practice in Africa, although they will also require further critical testing and feedback. Insights gained from future research on the MAE concept will inform the efforts needed to describe and articulate the concept more clearly, enrich the discipline, and ultimately improve practice and policymaking.
Acknowledgements
We completed this research with important collaborations, and we want to thank the experts (Bagele Chilisa, Fanie Cloete, Zenda Ofir, Florence Etta, Ziad Moussa, Denis Jobin, Donna Podems, and Benita Williams), all of whom consented to have their names published here. They freely gave their time, counsel, and insights and we thank them very much. Our conclusions are our own and should not be attributed to these individuals.
Competing interests
The authors declare that they have no financial or personal relationships that may have inappropriately influenced them in writing this article.
Authors’ contributions
This is the dissertation study of O.O., who contributed to the conceptualisation, methodology, formal analysis, investigation, writing original draft, visualisation, validation, data curation, resources, and editing of this manuscript. The co-authors T.A., K.N., M.S. and J.A. all contributed to the conceptualisation, project administration, validation, editing, and supervision of the manuscript.
Ethical considerations
This study was approved by Western Institutional Review Board (WIRB) (Protocol number: 18-640). This article followed all ethical standards for research without direct contact with human or animal subjects.
Funding information
The authors would like to thank the Virginia Tech Open Access Subvention Fund for their support of the publication of this article.
Data availability
The authors confirm that the data supporting the findings of this study are available from the corresponding author, O.O., upon reasonable request. The data are not publicly available because of restrictions, for example their containing information that could compromise the privacy of research participants.
Disclaimer
The views and opinions expressed in this article are those of the authors and do not necessarily reflect the official policy or position of any affiliated agency of the authors.
References
AfrEA, 2007, African evaluation guidelines – Standard and norm, viewed 24 January 2019, from http://www.afrea.org/?page=EvaluationGuildline.
Akins, R., Tolson, H. & Cole, B., 2005, ‘Stability of response characteristics of a Delphi panel: Application of bootstrap data expansion’, BMC Medical Research Methodology 5(37), 1–12, viewed 12 December 2018, from https://bmcmedresmethodol.biomedcentral.com/track/pdf/10.1186/1471-2288-5-37.pdf.
Brewer, J. & Hunter, A., 2006, Foundations of multimethod research: Synthesizing styles, Sage, Thousand Oaks, CA.
Canas, A., Novak, J., Reiska, P. & Ahlberg, M. (eds.), 2008, ‘Concept mapping: Connecting educators’, Proceedings of the 3rd International Conference on Concept Mapping, University of Helsinki, Yliopistonkatu, September 22–25, 2008.
Chilisa, B., 2012, Indigenous research methodologies, Sage, Thousand Oaks, CA.
Chilisa, B., 2015, A synthesis paper on the made in Africa evaluation concept, viewed 12 October 2018, from http://afrea.org/wp-content/uploads/2018/02/MAE-Chilisa-paper-2015-docx.pdf.
Chilisa, B. & Malunga, C., 2012, ‘Made in Africa Evaluation: Uncovering African roots in evaluation theory and practice’, Paper presented at the Bellagio Conference, November 14–17, 2012.
Chilisa, B. & Mertens, D., 2021, ‘Indigenous Made in Africa Evaluation frameworks: Addressing epistemic violence and contributing to social transformation’, American Journal of Evaluation 42(2), 241–253. https://doi.org/10.1177/1098214020948601
Cloete, F., 2016, ‘Developing an African-rooted program evaluation approach’, African Journal of Public Affairs 9(4), 55–70.
Cloete, F. & Auriacombe, C., 2019, ‘Revisiting decoloniality for more effective research and evaluation’, African Evaluation Journal 7(1), a363. https://doi.org/10.4102/aej.v7i1.363
Corbin, J. & Strauss, A., 2015, Basics of qualitative research: Techniques and procedures for developing grounded theory, Sage, Thousand Oaks, CA.
Cram, F., 2018, ‘Conclusion: Lessons about indigenous evaluation’, Indigenous Evaluation, New Directions for Evaluation 2018(159), 121–133. https://doi.org/10.1002/ev.20326
Creamer, E.G., 2017, An introduction to fully integrated mixed method research, Sage, Thousand Oaks, CA.
Dalkey, N.C., 1972, ‘The Delphi method: An experimental study of group opinion’, in N. Dalkey, D. Rourke, R. Lewis & D. Snyder (eds.), Studies in the quality of life: Delphi and decision-making, pp. 13–54, Lexington Books, Lexington, KY.
De Sousa Santos, B., 2018, The end of the cognitive empire: The coming of age of epistemologies of the South, Duke University Press, Durham.
Dillman, D.A., Smyth, J.D. & Christian, L.M., 2014, Internet, phone, mail, and mixed-mode surveys: The tailored design method, John Wiley & Sons, Hoboken, NJ.
Fanon, F., 1965, The wretched of the earth, Grove Press, New York, NY.
Fournier, D., 2005, ‘Evaluation’, in S. Mathison (ed.), Encyclopedia of evaluation, pp. 139–140, Sage, Thousand Oaks, CA.
Glaser, B.G. & Strauss, A.L., 1967, The discovery of grounded theory: Strategies for qualitative research, Aldine, Chicago, IL.
Harvey, D., 2007, A brief history of neoliberalism, Oxford University Press, Oxford.
Helmer-Hirschberg, O., 1967, Analysis of the future: The Delphi method, RAND Corporation, viewed 05 January 2019, from https://www.rand.org/pubs/papers/P3558.html.
Hsu, C. & Sanford, B.A., 2007, ‘Delphi technique: Making sense of consensus’, Practical Assessment, Research, and Evaluation 12(10), 1–8.
Lundgren, L. & Peacock, C., 2010, ‘Postcolonialism and development – A critical analysis of The European consensus on development’, LUP student paper, Department of Political Science, Lund University.
McMichael, P. & Weber, H., 2020, Development and social change, 7th edn., Sage, Thousand Oaks, CA.
Mertens, D., 2008, Transformative research and evaluation, Guilford, New York, NY.
Mertens, D.M. & Wilson, A.T., 2012, Program evaluation theory and practice: A comprehensive guide, Guilford Press, New York, NY.
Mouton, C., Rabie, B., Cloete, F. & De Coning, C., 2014, ‘Historical development & practice of evaluation’, in F. Cloete, B. Rabie & C. De Coning (eds.), Evaluation management in South Africa and Africa, pp. 28–78, African Sun Media, Stellenbosch.
Omosa, O., 2016, ‘Culturally responsive evaluation: Perceptions of Nigerian evaluators’, Paper presented at the Annual Conference of the American Evaluation Association, Atlanta, GA, 22nd–29th October.
Omosa, O., 2019, Towards defining made in Africa evaluation, PhD dissertation, Virginia Tech, Blacksburg, viewed 12 October 2018, http://hdl.handle.net/10919/88834.
Tamale, S., 2020, Decolonization and afro-feminism, Daraja Press, Cantley.
Thomas, V. & Madison, A., 2010, ‘Integration of social justice into the teaching of evaluation’, American Journal of Evaluation 31(4), 570–583. https://doi.org/10.1177/1098214010368426
Tiffin, H., 1995, ‘Post-colonial literature and counter-discourse’, in B. Ashcroft, G. Griffiths & H. Tiffin (eds.), The post-colonial studies readers, pp. 95–98, Routledge, New York, NY.
Uwizeyimana, D., 2020, ‘Ubuntu and the challenges of Africa-rooted public policy evaluation approach’, Journal of African Foreign Affairs 7(3), 113–129.
Vo, A.N., 2013, ‘Towards a definition of evaluative thinking’, PhD thesis, Dept. of Education, University of California, Los Angeles, CA.