About the Author(s)


Mark Abrahams
Southern Hemisphere Consultants (Pty) Ltd, Cape Town, South Africa

Florence Etta
TY Danjuma Foundation, Lagos, Nigeria

Michele Tarsilla
UNICEF, Dakar, Senegal

Kambidima Wotela
Department of Management Studies, Faculty of Commerce, University of the Witwatersrand, Johannesburg, South Africa

Citation


Abrahams, M., Etta, F., Tarsilla, M. & Wotela, K., 2021, ‘Evidence-based decision-making in the era of big data’, African Evaluation Journal 9(1), a602. https://doi.org/10.4102/aej.v9i1.602

Editorial

Evidence-based decision-making in the era of big data

Mark Abrahams, Florence Etta, Michele Tarsilla, Kambidima Wotela

Copyright: © 2021. The Author(s). Licensee: AOSIS.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Evidence-based policymaking and decision-making, and evidence-informed strategies, have become regular mantras in the governance and management discourse of governments and corporates across Africa and internationally. Governments have yet to exploit fully the incredible range of possibilities that can be extracted from available big data, whereas some private companies are becoming increasingly adept at creating products and services based on the data generated by the actions and behaviours of their clients. Governments, meanwhile, remain responsible for the delivery of basic services at the local level as well as for the large-scale infrastructure needed for daily use, ongoing economic development and even recreation. The Africa Evidence Network’s (AEN) Evidence Week 2021 events revealed that some government sectors, for example in Benin, Kenya and South Africa, have been experimenting with governance systems in which 60% of resources and effort is devoted to capacity building and 40% to evidence use; others have inverted this ratio, allocating 60% to evidence gathering and manipulation and 40% to capacity development. This is indeed very encouraging.

However, as part of our reflections on developing credible data for good governance, we need to situate our solutions in the historical and socio-political contexts that will determine how such efforts are taken up by local citizens. We cannot assume that, because we used rigorous methodologies to generate data and packaged the data into useful chunks of information with visual aids and appropriate media, people will readily accept what we produce or the actions and policies derived from the data. The evidence-linked actions, measures, precautions and policies that emerged during the coronavirus disease 2019 (COVID-19) pandemic have increased people’s scepticism about evidence and about what counts as evidence. Even highly credible sources of data, such as the World Health Organization (WHO) and professional health councils, are being challenged. Why so? Because evidence-informed decisions and actions have resulted in death, hunger, poverty, unemployment, mental health problems, and personal and collective suffering. We need to be cognisant of the perceptions that developed as a consequence.

The era of big data also brings the proliferation of mass media, the abundance of fake news and the manipulation of information in the service of national, regional and personal political interests. Governance decisions, policies and strategies informed by credible evidence must therefore be seen to be transparent, people-centred and beneficial to the target population. The challenge for governance structures is to reflect on, and go beyond, the generation of credible data so that people on the African continent can trust the data and the evidence-based decisions that follow from it. Monitoring and evaluation (M&E) systems, and trustworthy personnel, should help to build and rebuild this trust in evidence.

In this edition, Chapman, Tjasink and Louw (2021) question whether the investments in National Evaluation Systems (NESs) can bring about meaningful policy change. They describe the efforts of commissioned external evaluators in developing an evaluation approach to assess the efficacy of some of the most important policies and programmes aimed at supporting South African farmers over two decades. The diagnostic evaluation approach they developed guides end users through a series of logical steps to help them make sense of an existing evidence base in relation to the root problems being addressed and the specific needs of the target populations. They found a lack of policy coherence in key areas, most notably extension and advisory services, and microfinance and grants. This was characterised by: (1) an absence of common understanding of policies and objectives, (2) overly ambitious objectives often not directly linked to the policy frameworks, (3) a lack of logical connections between target groups and interventions, and (4) inadequate identification, selection, targeting and retention of beneficiaries. The diagnostic evaluation allowed for uniquely cross-cutting and interactive engagement with a complex evidence base, and the evaluation process shed light on new evaluation review methods that might work to support a national evaluation system.

Ba (2021) points out that the roadblocks to achieving development in Africa relate largely to ineffective development management practices. He presents a framework for measuring M&E system effectiveness as a development management tool. According to him, the framework helps to better understand the success factors of an effective M&E system and how they contribute to improved development management. He suggests that there are significant linkages between ‘M&E-System quality’, ‘M&E-Information quality’ and ‘M&E-Service quality’, and concludes that an effective M&E system contributes greatly to ‘improved policy and programme design’, ‘improved operational decisions’, ‘improved tactical and strategic decisions’ and ‘improved capability to advance development objectives’.

The Made in Africa Evaluation (MAE) theme is explored by Omosa, Archibald, Niewolny, Stephenson and Anderson (2021), who used the Delphi technique to solicit informed views from expert evaluators working in Africa. The objective of their study was to provide a working definition of MAE by addressing the following research questions:

  1. How do thought leaders in the African evaluation field define Made in Africa Evaluation?

  2. How are MAE principles operationalised and presented in evaluation reports?

  3. What next steps do African evaluation thought leaders believe are necessary to advance the MAE concept?

Their solicitation efforts yielded the following definition: MAE is evaluation that is conducted based on African Evaluation Association (AfrEA) standards, using localised methods or approaches, with the aim of aligning all evaluations to the lifestyles and needs of affected African peoples while also promoting African values. They offer this as a tentative working definition that has, according to them, the potential to influence the practice, study and teaching of evaluation in Africa. This theme will be explored again in an upcoming edition of this journal.

The growth of evidence-based decision-making is the focus of the case studies by Amisi, Awal, Pabari and Bedu-Addo (2021), which document experiences of evidence use in different public policy spaces in South Africa, Kenya, Ghana and the Economic Community of West African States (ECOWAS). The authors state that the use of evidence in policy is complex and requires systems, processes, tools and the flow of information between different stakeholders. They demonstrate how relationships between knowledge generators and users were built and maintained in the case studies, and how these relationships were critical for evidence use.

The case studies demonstrate that initiatives to build relationships between different state agencies, between state and non-state actors, and among non-state actors are critical to enabling organisations to use evidence. Such relationships can be fostered by creating sensitively facilitated, ongoing spaces for dialogue in which actors become aware of evidence, understand it and are motivated to use it. The authors conclude that a reciprocal and trusting relationship between individuals and institutions in different sectors is a conduit through which information flows between sectors, new insights are generated and evidence is used.

Finally, Jansen van Rensburg and Loye (2021) provide evidence of the need for gender transformative evaluation training for young and emerging African evaluators. They claim that gender issues and evaluation capacity in the Global North do not necessarily match those in the Global South, and that the Global South has rich experiences related to equity and gender; an important group to target for capacity building is young and emerging evaluators (YEE). They report on a study that investigated the needs of YEE in Africa regarding gender-responsive evaluation training, and found that only one-third of respondents had participated in training programmes on gender transformative evaluation or on evaluation with a gender focus. Topics covered included evaluating gender-focused interventions (gender analysis and developing recommendations), gender transformative aspects of evaluation studies in general (including applying a gender perspective to all types of policies, programmes and projects), and participatory approaches to ensure gender equity.

References

Amisi, M.M., Awal, M.S., Pabari, M. & Bedu-Addo, D., 2021, ‘How relationship and dialogue facilitate evidence use: Lessons from African countries’, African Evaluation Journal 9(1), a559. https://doi.org/10.4102/aej.v9i1.559

Ba, A., 2021, ‘How to measure monitoring and evaluation system effectiveness?’, African Evaluation Journal 9(1), a553. https://doi.org/10.4102/aej.v9i1.553

Chapman, S.A., Tjasink, K. & Louw, J., 2021, ‘What works for poor farmers? Insights from South Africa’s national policy evaluations’, African Evaluation Journal 9(1), a548. https://doi.org/10.4102/aej.v9i1.548

Jansen van Rensburg, M.S. & Loye, A.S., 2021, ‘Young and emerging African evaluators’ need for gender responsive evaluation training’, African Evaluation Journal 9(1), a556. https://doi.org/10.4102/aej.v9i1.556

Omosa, O., Archibald, T., Niewolny, K., Stephenson, M. & Anderson, J., 2021, ‘Towards defining and advancing “Made in Africa Evaluation”’, African Evaluation Journal 9(1), a564. https://doi.org/10.4102/aej.v9i1.564


