Abstract
Background: African higher education institutions lag behind their global counterparts in the number of research outputs produced. Early-career researcher development programmes play a critical role in addressing this shortcoming, and monitoring and evaluation (M&E) is vital in ensuring that such programmes deliver meaningful outcomes. However, M&E is an expensive process, which is problematic in the resource-constrained context of the African continent. Traditionally, practitioners use expensive data analysis software suites such as the Statistical Package for the Social Sciences (SPSS) to analyse quantitative M&E data. Although open-source programming languages such as Python are free to use, no Python libraries exist that are aimed at the analyses needed for quantitative M&E data, resulting in a steep learning curve for new Python users.
Objectives: The objective of this article was to develop a Python library of functions to make Python a user-friendly alternative for analysing quantitative M&E data.
Method: A Python library of functions automating M&E data analysis procedures was developed. The Python M&E library was tested on quantitative evaluation data from an early-career researcher development programme event, and the output was compared with that obtained using the SPSS graphical user interface (GUI).
Results: The Python M&E library functions produced identical results to the output produced using the SPSS GUI.
Conclusion: The results showed that the Python M&E library makes Python a viable, free and time-saving alternative for the analysis of quantitative M&E data.
Contribution: This article contributes by providing a free alternative method for analysing quantitative M&E data, which can help evaluation practitioners in the developing world reduce the costs associated with evaluating capacity development programmes.
Keywords: early career researchers; capacity development; monitoring and evaluation; SPSS; Python; open-source programming language; quantitative data analysis; Python library.
Introduction
Higher education institutions (HEIs) play a vital role in human capital development and economic growth worldwide (Adedeji & Campbell 2013; Chankseliani, Qoraboyev & Gimranova 2021; Krishna 2019; Sebola 2022). Universities and HEIs are responsible for training professionals to become teachers, doctors, engineers, lawyers, etcetera, and as such, play a crucial role in developing peaceful, inclusive and just societies (Chankseliani et al. 2021). The role of HEIs has become even more pronounced since the emergence of the so-called knowledge economies (Powell & Snellman 2004). In modern knowledge economies, knowledge, not labour, drives economic growth especially through technological innovations resulting from the production, transmission, dissemination and application of scientific research (Krishna 2019; Tilak 2022). In the current economy, it is unsurprising that research-intensive HEIs and their scientific outputs play an ever more critical role in national innovation systems, societies and economies (Krishna 2019).
Unfortunately, African countries lag behind their counterparts in other world regions in their transition towards knowledge economies, making it difficult for the continent to compete globally (Asongu & Odhiambo 2019; Phale et al. 2021). Given the pivotal role that the scientific outputs of research-intensive HEIs play in the growth of knowledge economies, countries that lag in this transition would be expected to have lower scientific research outputs than countries well advanced in the process. This is indeed the case for the African continent, which produces less than 4% of the global research output despite making up over 15% of the world’s population (Fonn et al. 2018; Olufadewa, Adesina & Ayorinde 2020; UNESCO 2021). In South Africa, for example, although the number of research publications increased by an average of 9.5% per year between 2011 and 2019, the country still produced only 1.8% of the global number of research publications in 2019 (Sebola 2022; UNESCO 2021).
There are myriad reasons for the low research productivity of African countries. A lack of research collaborations and funding, insufficient networks and low research self-efficacy are frequently cited reasons (Igiri et al. 2021; Kanyengo & Smith 2022; Uwizeye et al. 2022). In addition, the rapid growth in student numbers at HEIs across the continent has resulted in academic staff overloaded with teaching and administrative responsibilities, leaving them little time to either conduct research or develop their research competencies to produce quality outputs (Mushemeza 2016; Teferra 2016).
A further threat to research productivity on the continent is the ageing professoriate and the slow pace at which retiring productive academics are replaced by equally productive younger researchers (Hlengwa 2019; Mabokela 2021). Early-career researchers in Africa lack support, mentorship and guidance, are not prioritised by funding agencies when grants are awarded, are overburdened by heavy workloads and do not have the physical infrastructure required to conduct high-quality research (Kumwenda et al. 2017; Salihu Shinkafi 2020). The result is a cohort of early-career researchers who lack the skills, support, time, funding and infrastructure to establish themselves as productive senior researchers and replace their retiring colleagues.
Several programmes designed to build the research capacity of African early-career researchers and increase research outputs have been funded, launched and run by universities, research consortia, international funding bodies and charitable foundations (FSNet-Africa 2022; Kasprowicz et al. 2020; Mackay, Roux & Bouwer 2020; The Association of Commonwealth Universities 2020; University of Cape Town 2020). The Food Systems Research Network for Africa (FSNet-Africa) is one such project. Launched in 2020, FSNet-Africa is a research excellence project funded by the Global Challenges Research Fund (GCRF) under the partnership between the United Kingdom Research and Innovation (UKRI) and the African Research Universities Alliance (ARUA). The project seeks to strengthen African early-career researchers’ capacities to conduct gender-responsive food systems research and translate findings into implementable policy-influencing solutions and practical interventions (FSNet-Africa 2022).
Monitoring and evaluation (M&E) of research capacity-building interventions such as FSNet-Africa have become essential to ensure that the interventions achieve their purpose and deliver meaningful outcomes. In addition, policymakers and funders increasingly demand rigorous M&E processes to monitor the programmes’ management, governance and impact (Adam 2015; Gadsby 2011; Jones-Devitt & Austen 2021; Marjanovic et al. 2017; Marjanovic, Hanney & Wooding 2009). Given the complex nature of capacity-building interventions, determining whether an intervention is ‘successful’ is not always straightforward. Success is often not overtly visible in the complex landscape of capacity-building, manifesting instead as slow and incremental changes over time. It is, therefore, almost impossible to measure what worked and what did not without systematic enquiry (Adam 2015). A well-developed evaluation framework allows the programme implementers to measure these incremental changes effectively by determining if and how well the objectives of an implemented intervention have been met (Jones-Devitt & Austen 2021).
The practice of M&E is, however, not without challenges. Two of the most frequently cited challenges relate to financial and time constraints (INTRAC, PSO & PRIA 2011; Marjanovic et al. 2017; PricewaterhouseCoopers 2019; Sithole 2017). Developing and implementing a comprehensive M&E system is time-consuming and labour-intensive, leading to high costs. While experts estimate that 8% – 10% of a programme’s budget should be allocated to M&E, M&E practitioners are often hampered by financial constraints when commissioned with a programme evaluation (INTRAC et al. 2011; Marjanovic et al. 2017; PricewaterhouseCoopers 2019; Sithole 2017). In the developing world, especially, organisations do not always design a proper plan for programme M&E, resulting in inadequate amounts of money budgeted for M&E activities (Sithole 2017).
Efforts should be made to reduce the costs associated with M&E activities so that programme implementers and other stakeholders can afford to have their programmes properly monitored and evaluated, and can use the results to adapt their programmes for optimal impact. The data analysis stage of the M&E process can be particularly time-consuming and costly. For example, the Statistical Package for the Social Sciences (SPSS) is one of the most frequently used software tools for analysing data from M&E studies (International Organization for Migration [IOM] 2020; Singh, Chandurkar & Dutt 2017) and, especially in the African context, the data analysis tool most frequently taught in the M&E capacity-building programmes and degrees used to train M&E practitioners (Stellenbosch University 2022; UNIMA ICT Centre 2022; University of Cape Town 2022; University of Zambia Centre for Information and Communication Technologies [CICT] 2018). Although SPSS is user-friendly and easy to learn, it comes at a price, with a monthly subscription starting at USD 99.00 per user (IBM 2022).
In contrast, open-source programming languages such as Python and R are free to use, more powerful than SPSS and able to handle bigger datasets (Coursera 2023; Ozgur et al. 2021). However, statistics anxiety (a fear of learning and using statistical analysis techniques) is a common problem (Amirian & Abbasi-Sosfadi 2021; Bourne 2018; Chamberlain, Hillier & Signoretta 2015; Dykeman 2011; Fairlamb, Papadopoulou & Bourne 2022; Macher et al. 2013; Shukla & Kumar 2020; Stickels & Dobbs 2007; Teman 2013), and switching from a graphical user interface (GUI) programme such as SPSS to writing code in an open-source programming language like Python can be daunting for many. Although Python libraries like NumPy (Harris et al. 2020) and pandas (Mckinney 2011) have made learning how to conduct descriptive statistics in Python much easier, there are currently, to the authors’ knowledge, no Python libraries that produce output similar to the descriptive statistics output of SPSS. For example, a frequency table produced in SPSS includes the count, percentage and valid percentage all in one table. Seeing the percentage (calculated from the total number of respondents to the survey) and the valid percentage (calculated from the total number of responses to the question) in a single table helps M&E practitioners get a feel for their data and write comprehensive reports. The functions currently available in the pandas and NumPy libraries for frequency distributions produce only the count and valid percentage when executed on a variable in a dataset.
This study describes the development of a Python library, built on the NumPy and pandas libraries, with a set of functions specific to the needs of M&E practitioners. Selected questions from a dataset from the FSNet-Africa project are analysed using both SPSS and the newly created library, and the output is compared. It is envisaged that this library of M&E data analysis functions will make learning to use Python less daunting for M&E practitioners and reduce the costs associated with the M&E process by removing the need to buy expensive proprietary data analysis software.
Methods
Developing the monitoring and evaluation library functions
The first function was developed to enable data analysts to produce replications of SPSS-style frequency tables when running descriptive statistics on data by typing a single line of code in Python. The function was built on the pandas crosstab function (NumFOCUS, Inc. 2023) and can be seen in Figure 1.
FIGURE 1: Function 1: Replicating a Statistical Package for the Social Sciences-style frequency table.
The frequency_counts_valid function operates on the columns of a dataset (referred to as a dataframe in Python) and takes two parameters: variable_name and df. The variable_name parameter refers to the name of the variable for which a frequency table is required, and df refers to the name given to the dataset containing this variable.
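Since the published code appears in Figure 1, a minimal sketch of the same logic may be useful here. The sketch below is illustrative rather than the published implementation: it uses value_counts in place of the crosstab call and assumes missing responses are stored as NaN.

```python
import numpy as np
import pandas as pd

def frequency_counts_valid(variable_name, df):
    """Illustrative sketch of an SPSS-style frequency table: count,
    percentage (of all respondents) and valid percentage (of those
    who answered the question)."""
    counts = df[variable_name].value_counts(dropna=True)
    n_total = len(df)                 # all respondents to the survey
    n_valid = int(counts.sum())       # respondents who answered this question
    table = pd.DataFrame({
        "Count": counts,
        "Percent": (counts / n_total * 100).round(1),
        "Valid Percent": (counts / n_valid * 100).round(1),
    })
    missing = n_total - n_valid
    # unlike SPSS, always show the missing line, even when it is zero
    table.loc["Missing"] = [missing, round(missing / n_total * 100, 1), np.nan]
    table.loc["Total"] = [n_total, 100.0, 100.0]
    return table
```

For a variable with responses ['Yes', 'Yes', 'No', NaN], this sketch reports a percentage of 50.0 and a valid percentage of 66.7 for 'Yes', mirroring the SPSS percent/valid percent distinction.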
Next, a function was developed for running analyses on questions where respondents can select more than one option, so-called ‘select all that apply’ questions. This function was developed in two steps. The code written in the first step (Figure 2) again creates a frequency table similar to that produced by SPSS. In the second step, a for-loop applies the code written in the first step to each option of a select-all-that-apply question. Finally, the code after the for-loop concatenates all the frequency tables created by the loop into a single table (Figure 3). To run this function, analysts need to enter parameters for the original dataset (df_original), a dataset containing only the variables that are part of the select-all-that-apply question (df_saa_question), and a dataset containing only those variables but with all rows dropped that have missing data for every option of the question (df_saa_question_dropped).
FIGURE 2: Function 2: Creating a frequency table for ‘select all that apply’ questions – Step 1.
FIGURE 3: Function 2: Creating a frequency table for ‘select all that apply’ questions – Step 2.
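The two-step logic described above might be sketched as follows. The parameter names follow the description in the text; the implementation itself is illustrative, not the published code, and assumes each option column contains a value when the option is ticked and NaN otherwise.

```python
import pandas as pd

def saa_frequency_table(df_original, df_saa_question, df_saa_question_dropped):
    """Illustrative sketch: count, percentage and valid percentage for
    every option of a 'select all that apply' question in one table."""
    tables = []
    # Step 1 logic, applied to each option column by the for-loop (Step 2)
    for option in df_saa_question.columns:
        count = int(df_saa_question[option].notna().sum())
        tables.append(pd.DataFrame({
            "Count": [count],
            # percentage: based on all survey respondents
            "Percent": [round(count / len(df_original) * 100, 1)],
            # valid percentage: based on respondents who ticked at least one option
            "Valid Percent": [round(count / len(df_saa_question_dropped) * 100, 1)],
        }, index=[option]))
    # concatenate the per-option frequency tables into a single table
    return pd.concat(tables)
```

Passing the original dataframe, the subset of option columns and the subset with all-missing rows dropped (e.g. via df_saa_question.dropna(how='all')) yields one table covering every option, rather than one SPSS run per option.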
As gathering continuous or ordinal data is often needed for the M&E process, a function was also developed to obtain measures of central tendency for continuous or ordinal variables using a single line of code (Figure 4). The function allows analysts to run descriptive statistics on continuous variables by only entering parameters for the dataset (original_df), the variable they want to run descriptive statistics on (variable_name) and how they want the variable name to appear in the descriptive statistics table produced (variable_rename). The descriptive statistics table produced by running this function includes values for the mean, median, mode, standard deviation, range, skewness and kurtosis of the continuous or ordinal variable.
FIGURE 4: Function 3: Obtaining measures of central tendency for continuous and ordinal variables.
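Such a function might be sketched with the corresponding pandas Series methods. The parameter names follow the description above; the implementation is illustrative, not the code shown in Figure 4.

```python
import pandas as pd

def descriptives_table(original_df, variable_name, variable_rename):
    """Illustrative sketch: one-row table of descriptive statistics
    for a continuous or ordinal variable."""
    s = original_df[variable_name].dropna()
    return pd.DataFrame({
        "Mean": [s.mean()],
        "Median": [s.median()],
        "Mode": [s.mode().iloc[0]],    # first mode if several exist
        "Std. Deviation": [s.std()],   # sample standard deviation
        "Range": [s.max() - s.min()],
        "Skewness": [s.skew()],
        "Kurtosis": [s.kurt()],        # excess kurtosis (normal distribution = 0)
    }, index=[variable_rename])
```

Pandas uses the bias-adjusted sample estimators for the standard deviation, skewness and kurtosis, which is why its output can match the SPSS descriptives table.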
In the final step, a function was developed to allow M&E practitioners to run a two-by-two cross-tabulation between two dichotomous variables, enabling them to see the relationship between the two variables broken down into row-wise and column-wise percentages.
To run the function, analysts only need to enter the parameters for the dataset (df), the variable that they want to make up the rows of the cross-tabulation table (row_var), the variable that they want to make up the columns of the cross-tabulation table (column_var) and the names of the two categories of the column variable (var_1 and var_2).
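Such a cross-tabulation might be sketched with pandas.crosstab and its normalize parameter. The parameter names follow the description above; this is an illustrative re-implementation, not the published code.

```python
import pandas as pd

def crosstab_two_by_two(df, row_var, column_var, var_1, var_2):
    """Illustrative sketch: two-by-two cross-tabulation of two
    dichotomous variables with counts, row-wise and column-wise
    percentages."""
    counts = pd.crosstab(df[row_var], df[column_var])
    # normalize='index' gives row-wise, 'columns' gives column-wise percentages
    row_pct = pd.crosstab(df[row_var], df[column_var], normalize="index") * 100
    col_pct = pd.crosstab(df[row_var], df[column_var], normalize="columns") * 100
    return pd.DataFrame({
        f"{var_1} (n)": counts[var_1],
        f"{var_1} (row %)": row_pct[var_1].round(1),
        f"{var_1} (column %)": col_pct[var_1].round(1),
        f"{var_2} (n)": counts[var_2],
        f"{var_2} (row %)": row_pct[var_2].round(1),
        f"{var_2} (column %)": col_pct[var_2].round(1),
    })
```

One call thus replaces the two separate percentage tables (row-wise and column-wise) that would otherwise be requested through the SPSS crosstabs dialogue.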
Testing the monitoring and evaluation library of functions
The functions developed were tested by analysing selected questions from the evaluation of the United Kingdom (UK) Summer School event, which formed part of the FSNet-Africa project, using both SPSS and the newly created Python functions, and comparing the results.
Ethical considerations
Ethical approval for conducting M&E within the FSNet-Africa project, and thus for obtaining the data for this study, was granted by the University of Pretoria’s Faculty of Natural and Agricultural Sciences Ethics Committee on 18 June 2021. The ethics approval number is NAS104/2021.
All procedures performed in this study were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.
All the FSNet-Africa fellows were informed in writing prior to joining the fellowship that all capacity development events that formed part of the project would be monitored and evaluated. Fellows therefore consented to the project using the M&E data they would provide by agreeing to participate in the fellowship.
However, all M&E activities were voluntary, and fellows were free to withdraw from the activities at any time. Furthermore, a data-management plan written for the FSNet-Africa project stipulated how the confidentiality of M&E data should be protected. The measures included the requirement that all identifying particulars be removed prior to the data being processed. The dataset used for this study thus did not contain any identifying particulars of the fellows who completed the survey.
Results
Testing Function 1: Replicating a Statistical Package for the Social Sciences-style frequency table
To test Function 1, the following question from the UK Summer School evaluation was analysed using SPSS’s GUI and Function 1 in Python, and the results compared: Did you attend the FSNet-Africa UK summer school (either online or in person)?
The Python code and output and the SPSS output can be seen in Table 1.
TABLE 1: Testing Function 1: Comparing Python and Statistical Package for the Social Sciences output.
As can be seen from Table 1, the analysis results were identical when comparing the output from SPSS and that produced by running Function 1 in Python on the question, Did you attend the FSNet-Africa UK Summer School (either online or in person)? The only difference is that SPSS does not show the missing data line if there are no missing data, whereas the output from Function 1 indicates zero missing cases for this question.
Testing Function 2: Creating a frequency table for ‘select all that apply’ questions
To test Function 2, the following question from the UK Summer School evaluation was analysed using SPSS’s GUI and Function 2 in Python, and the results compared: Please indicate your role in FSNet-Africa by selecting the appropriate options below (select all that apply).
The results of the analyses can be seen in Table 2 and Table 3.
TABLE 2: Testing Function 2: Python code and output.
TABLE 3: Testing Function 2: Statistical Package for the Social Sciences output.
As can be seen in Table 2 and Table 3, the percentage and valid percentage for each category specifying participants’ roles in FSNet-Africa were identical between the SPSS and Python outputs when testing Function 2. However, the code written for the analysis in Python allowed the researcher to obtain each category’s percentage and valid percentage in a single analysis and one table. In contrast, the analysis using the GUI of SPSS had to be run twice to obtain the percentage and valid percentage for each category – first with all respondents included (percentage) and, second, after manually deleting respondents who did not respond to any of the categories and could thus be seen as not having responded to the question (valid percentage).
Testing Function 3: Running measures of central tendency for continuous and ordinal variables
Function 3 was tested by comparing the output from running the code for Function 3 in Python and using SPSS’s GUI to obtain measures of central tendency for the following continuous variable in the UK Summer School evaluation: Please indicate how valuable you found the feedback you received from experts after your research project presentation on a scale of 1 to 5, where 1 = not at all valuable and 5 = very valuable (Table 4).
TABLE 4: Testing Function 3: Comparing Python and Statistical Package for the Social Sciences output (N = 6).
As can be seen from Table 4, the results from using SPSS’s GUI and Function 3 in Python were identical for all the measures of central tendency included in Function 3.
Testing Function 4: Running two-by-two cross-tabulation tables
Function 4 (Figure 5) was tested by running the code for Function 4 in Python and using SPSS’s GUI to obtain a two-by-two cross-tabulation between two dichotomous questions, for which the response distributions are shown in Table 5.
FIGURE 5: Function 4: Creating a two-by-two cross-tabulation including column-wise and row-wise percentages.
TABLE 5: Distribution of questions used to test Function 4.
From Table 6 it can be seen that the results of the two-by-two cross-tabulations were identical between the Python and SPSS outputs.
TABLE 6: Testing Function 4: Comparing Python and Statistical Package for the Social Sciences output.
Discussion
The results of this study show that functions developed using the NumPy (Harris et al. 2020) and pandas (Mckinney 2011) libraries in Python can significantly simplify the code needed to replicate M&E analysis results produced using SPSS.
The library of functions developed in this article allows analysts to run frequently used statistical analyses in the M&E field in Python by typing at most two or three lines of easy-to-understand code. The code for regular frequency tables of a single categorical variable replicates the frequency table produced by SPSS almost precisely, including percentage and valid percentage values. The code for select-all-that-apply questions is more time efficient than the options available in SPSS, as it produces a single table showing counts, percentages and valid percentages for each question option with only two lines of code. Analysis of select-all-that-apply questions in SPSS requires the user to run a frequency table for each option separately to see the percentage of respondents choosing each option. In addition, if they wish to obtain valid percentages, they need to rerun all the frequency tables after deleting participants who did not select any of the options and could thus be seen as not having answered the question. The functions developed for obtaining measures of central tendency for continuous or ordinal variables and for two-by-two cross-tabulations of two categorical variables likewise allow users to produce output almost replicating SPSS results by typing a single line of code.
As stated in the introduction, research capacity development programmes such as FSNet-Africa, aimed at building the capacities of early-career researchers and improving the quality and quantity of research outputs on the African continent, are vital for accelerating the transition of African countries into knowledge economies (FSNet-Africa 2022; Kasprowicz et al. 2020; Mackay et al. 2020; The Association of Commonwealth Universities 2020; University of Cape Town 2020). However, the time and financial constraints hampering the effective M&E of such programmes (INTRAC et al. 2011; Marjanovic et al. 2017; PricewaterhouseCoopers 2019; Sithole 2017) undermine their effectiveness, as rigorous M&E processes are crucial for measuring programme efficiency and identifying where changes and adjustments in programme design and delivery are needed (Adam 2015; Jones-Devitt & Austen 2021).
Open-source programming languages such as Python have the potential to reduce the costs associated with M&E data analysis processes by providing a free alternative to the expensive software usually employed (International Organization for Migration [IOM] 2020; Singh et al. 2017; Stellenbosch University 2022; UNIMA ICT Centre 2022; University of Cape Town 2022; University of Zambia Centre for Information and Communication Technologies [CICT] 2018). However, because of the frequently seen phenomenon of statistics anxiety (Amirian & Abbasi-Sosfadi 2021; Bourne 2018; Chamberlain et al. 2015; Dykeman 2011; Fairlamb et al. 2022; Macher et al. 2013; Shukla & Kumar 2020; Stickels & Dobbs 2007; Teman 2013), M&E practitioners may be reluctant to learn how to use free open-source programming languages for analysing M&E data. The library of functions developed in Python for this article simplifies the code needed to obtain the desired analysis results. It thus helps to flatten the learning curve of the open-source programming language, making it less daunting and more accessible to M&E practitioners. In addition, as shown by the function written for select-all-that-apply questions, open-source programming languages also have the potential to reduce the time needed for data analysis by automating some of the processes that need to be performed manually when using the GUI of programmes such as SPSS.
Conclusion and recommendations for future research
The results of this study show that open-source programming languages present a viable, free alternative to expensive proprietary software packages such as SPSS for M&E data analysis, provided the code needed to produce similar results can be sufficiently simplified.
Reducing the time and cost associated with M&E data analysis using the above-described library of Python functions can aid in removing the obstacles related to monitoring and evaluating early-career capacity-building programmes such as FSNet-Africa. Proper M&E can lead to more effective capacity-building programmes for up-and-coming researchers, giving them the skills and confidence to transition from early-career researchers to established researchers and eventually provide a critical mass of researchers ready to replace their colleagues reaching the end of their careers. Such a critical mass of productive and capable researchers should also result in an increase in the quality and quantity of African research outputs, ultimately helping to accelerate the transition of African countries towards knowledge economies.
Furthermore, the use of the tool can be expanded by creating communities of practice within and between university networks, expanding its use from M&E practitioners to research project managers and researchers. Using the tool in more diverse contexts may require increasing the variety of functions available. Future research can thus focus on collaborating with developers to expand the variety of easy-to-use functions available in the library for use by M&E practitioners and the research community at large. Simplifying the code needed for more advanced data analysis techniques in Python, such as chi-square tests of independence, t-tests and analysis of variance, can greatly benefit researchers, M&E practitioners and research project managers tasked with analysing data. Functions can also be developed to simplify the visualisation of data using the Python programming language. Training will still be required for M&E practitioners and research project managers to use Python and the tools developed in this study. It is, therefore, a further recommendation that African universities consider incorporating open-source programming languages such as Python in their curricula for M&E programmes instead of teaching students how to use the more expensive commercial data analysis software suites.
Acknowledgements
The authors acknowledge the United Kingdom Research and Innovation (UKRI) for funding the FSNet-Africa project (grant number ES/T015128/1).
Competing interests
The authors declare that they have no financial or personal relationships that may have inappropriately influenced them in writing this article.
Authors’ contributions
N.F. and M.M.-C. conceived of the presented idea. N.F. developed the Python functions, ran the analyses in Python and SPSS and contributed to writing all sections of the article. M.M.-C. contributed to writing the literature review and edited the article.
Funding information
The authors disclose receipt of the following financial support for the publication of this article: This work was supported by Rhodes University and the University of Pretoria. The FSNet-Africa project was funded by the UKRI (grant number ES/T015128/1).
Data availability
The data used for this study will be available as safeguarded data in the UK Data Service’s ReShare repository upon project completion. As the project is not yet completed, the data are not yet available in the repository; in the interim, they are available from the corresponding author, N.F., upon reasonable request.
Disclaimer
The views and opinions expressed in this article are those of the authors and are the product of professional research. They do not necessarily reflect the official policy or position of any affiliated institution, funder, agency or the publisher. The authors are responsible for this article’s results, findings and content.
References
Adam, F., 2015, ‘The strategic role of evaluations: A donor perspective’, in Zenex Foundation M&E Conference, 02 July, viewed 12 August 2022, from https://www.trialogueknowledgehub.co.za/images/topics/monitoring/strategicrole.pdf.
Adedeji, S.O. & Campbell, O., 2013, ‘The role of higher education in human capital development’, SSRN Electronic Journal. https://doi.org/10.2139/ssrn.2380878
Amirian, S.M.R. & Abbasi-Sosfadi, S., 2021, ‘Fear of statistics among TEFL postgraduate students’, Eurasian Journal of Applied Linguistics 7(1), 202–221. https://doi.org/10.32601/ejal.911253
Asongu, S. & Odhiambo, N., 2019, ‘Building knowledge-based economies in Africa: A systematic review of policies and strategies’, SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3472046
Bourne, V.J., 2018, ‘Exploring statistics anxiety: Contrasting mathematical, academic performance and trait psychological predictors’, Psychology Teaching Review 24(1), 35–43. https://doi.org/10.53841/bpsptr.2018.24.1.35
Chamberlain, J.M., Hillier, J. & Signoretta, P., 2015, ‘Counting better? An examination of the impact of quantitative method teaching on statistical anxiety and confidence’, Active Learning in Higher Education 16(1), 51–66. https://doi.org/10.1177/1469787414558983
Chankseliani, M., Qoraboyev, I. & Gimranova, D., 2021, ‘Higher education contributing to local, national, and global development: New empirical and conceptual insights’, Higher Education 81(1), 109–127. https://doi.org/10.1007/s10734-020-00565-8
Coursera, 2023, Python or R for data analysis: Which should I learn?, Coursera, viewed 09 January 2023, from https://www.coursera.org/articles/python-or-r-for-data-analysis.
Dykeman, B.F., 2011, ‘Statistics anxiety: Antecedents and instructional interventions’, Education 132(2), 441–446.
Fairlamb, S., Papadopoulou, H. & Bourne, V.J., 2022, ‘Reach for the STARS? The role of academic contingent self-worth in statistics anxiety and learning’, Learning & Motivation 78, 101815. https://doi.org/10.1016/j.lmot.2022.101815
Fonn, S., Ayiro, L.P., Cotton, P., Habib, A., Mbithi, P.M.F., Mtenje, A. et al., 2018, ‘Repositioning Africa in global knowledge production’, Lancet (London, England) 392(10153), 1163–1166. https://doi.org/10.1016/S0140-6736(18)31068-7
FSNet-Africa, 2022, ‘About us – FSNet Africa food solutions, nutrition, food systems research’, FSNet Africa, viewed 09 January 2023, from https://fsnetafrica.com/about-us/.
Gadsby, E.W., 2011, ‘Research capacity strengthening: Donor approaches to improving and assessing its impact in low- and middle-income countries’, The International Journal of Health Planning and Management 26(1), 89–106. https://doi.org/10.1002/hpm.1031
Harris, C.R., Millman, K.J., Van der Walt, S.J., Gommers, R., Virtanen, P., Cournapeau, D. et al., 2020, ‘Array programming with NumPy’, Nature 585(7825), 357–362. https://doi.org/10.1038/s41586-020-2649-2.
Hlengwa, A.I., 2019, ‘Developing the next generation of university teachers’, Critical Studies in Teaching and Learning (CriSTaL) 7(1), 1–18. https://doi.org/10.14426/cristal.v7i1.170
IBM, 2022, SPSS statistics – Pricing, viewed 09 January 2023, from https://www.ibm.com/products/spss-statistics/pricing.
Igiri, B.E., Okoduwa, S.I.R., Akabuogu, E.P., Okoduwa, U.J., Enang, I.A., Idowu, O.O. et al., 2021, ‘Focused research on the challenges and productivity of researchers in Nigerian academic institutions without funding’, Frontiers in Research Metrics and Analytics 6, viewed 01 September 2023, from https://www.frontiersin.org/articles/10.3389/frma.2021.727228.
International Organization for Migration (IOM), 2020, IOM monitoring and evaluation guidelines, IOM, Geneva, viewed 14 December 2022, from https://publications.iom.int/books/iom-monitoring-and-evaluation-guidelines.
INTRAC, PSO & PRIA, 2011, Monitoring and evaluation: New developments and challenges, Conference Report, INTRAC, Soesterberg.
Jones-Devitt, S. & Austen, L., 2021, A guide to basic evaluation in higher education (Why needed and how to do it), Staffordshire University, Staffordshire, viewed 08 December 2022, from https://www.enhancementthemes.ac.uk/docs/ethemes/about-us/eval-guide_qaas_pdf.pdf?sfvrsn=fff7d781_4.
Kanyengo, C.W. & Smith, G.J., 2022, ‘Factors affecting knowledge production, diffusion and utilisation at the University of Zambia School of Medicine’, Library Philosophy and Practice (e-journal), viewed 01 September 2022, from https://digitalcommons.unl.edu/libphilprac/7104.
Kasprowicz, V.O., Chopera, D., Waddilove, K.D., Brockman, M.A., Gilmour, J., Hunter, E. et al., 2020, ‘African-led health research and capacity building – is it working?’, BMC Public Health 20(1), 1104. https://doi.org/10.1186/s12889-020-08875-3
Krishna, V.V., 2019, ‘Universities in the national innovation systems: Emerging innovation landscapes in Asia-Pacific’, Journal of Open Innovation: Technology, Market, and Complexity 5(3), 43. https://doi.org/10.3390/joitmc5030043
Kumwenda, S., Niang, E.H.A., Orondo, P.W., William, P., Oyinlola, L., Bongo, G.N. et al., 2017, ‘Challenges facing young African scientists in their research careers: A qualitative exploratory study’, Malawi Medical Journal: The Journal of Medical Association of Malawi 29(1), 1–4. https://doi.org/10.4314/mmj.v29i1.1
Mabokela, R.O., 2021, ‘Equity and access to higher education twenty-five years later: A review of enrollment, graduation and employment trends in South African universities’, in H. Eggins, A. Smolentseva & H. de Wit (eds.), Higher education in the next decade, pp. 149–164, Brill.
Macher, D., Paechter, M., Papousek, I., Ruggeri, K., Freudenthaler, H.H. & Arendasy, M., 2013, ‘Statistics anxiety, state anxiety during an examination, and academic achievement’, British Journal of Educational Psychology 83(4), 535–549. https://doi.org/10.1111/j.2044-8279.2012.02081.x
Mackay, B., Roux, J.-P. & Bouwer, R., 2020, Building research capacity in early career researchers: Insights from an international climate research programme, Future Climate for Africa, Cape Town.
Marjanovic, S., Cochrane, G., Robin, E., Sewankambo, N., Ezeh, A., Nyirenda, M. et al., 2017, ‘Evaluating a complex research capacity-building intervention: Reflections on an evaluation of the African Institutions Initiative’, Evaluation 23(1), 80–101. https://doi.org/10.1177/1356389016682759
Marjanovic, S., Hanney, S. & Wooding, S., 2009, A historical reflection on research evaluation studies, their recurrent themes and challenges, RAND Corporation, viewed 08 December 2022, from https://www.rand.org/pubs/technical_reports/TR789.html.
McKinney, W., 2011, ‘Pandas: A foundational Python library for data analysis and statistics’, Python for High Performance and Scientific Computing 14(9), 1–9.
Mushemeza, E.D., 2016, ‘Opportunities and challenges of academic staff in higher education in Africa’, International Journal of Higher Education 5(3), 236–246. https://doi.org/10.5430/ijhe.v5n3p236
NumFOCUS, Inc., 2023, pandas.crosstab – Pandas 2.0.3 documentation, viewed 07 July 2023, from https://pandas.pydata.org/docs/reference/api/pandas.crosstab.html.
Olufadewa, I.I., Adesina, M.A. & Ayorinde, T., 2020, ‘From Africa to the world: Reimagining Africa’s research capacity and culture in the global knowledge economy’, Journal of Global Health 10(1), 010321. https://doi.org/10.7189/jogh.10.010321
Ozgur, C., Colliau, T., Rogers, G. & Hughes, Z., 2017, ‘MatLab vs. Python vs. R’, Journal of Data Science 15(3), 355–372. https://doi.org/10.6339/JDS.201707_15(3).0001
Phale, K., Li, F., Adjei Mensah, I., Omari-Sasu, A.Y. & Musah, M., 2021, ‘Knowledge-based economy capacity building for developing countries: A panel analysis in Southern African Development Community’, Sustainability 13(5), 2890. https://doi.org/10.3390/su13052890
Powell, W.W. & Snellman, K., 2004, ‘The knowledge economy’, Annual Review of Sociology 30(1), 199–220. https://doi.org/10.1146/annurev.soc.29.010202.100037
PricewaterhouseCoopers, 2019, Challenges and solutions in monitoring and evaluating international development cooperation: Exploring the role of digital technologies and innovation methodologies, PricewaterhouseCoopers, viewed 13 December 2022, from https://www.pwc.nl/nl/assets/documents/challenges-in-digital-innovation.pdf.
Salihu Shinkafi, T., 2020, ‘Challenges experienced by early career researchers in Africa’, Future Science OA 6(5), FSO469. https://doi.org/10.2144/fsoa-2020-0012
Sebola, M.P., 2022, ‘South Africa’s public higher education institutions, university research outputs, and contribution to national human capital’, Human Resource Development International 26(2), 217–231. https://doi.org/10.1080/13678868.2022.2047147
Shukla, S. & Kumar, R., 2020, ‘Researcher intention to use statistical software: Examine the role of statistical anxiety, self-efficacy and enjoyment’, International Journal of Technology and Human Interaction (IJTHI) 16(3), 39–55. https://doi.org/10.4018/IJTHI.2020070103
Singh, K., Chandurkar, D. & Dutt, V., 2017, A practitioners’ manual on monitoring and evaluation of development projects, Cambridge Scholars Publishing, Newcastle upon Tyne.
Sithole, P.M., 2017, ‘Challenges in conducting development evaluation: Dealing with perceptions’, in eVALUation Matters, First Quarter, viewed 13 December 2022, from https://idev.afdb.org/sites/default/files/Evaluations/2020-02/Challenges%20in%20Conducting%20Development%20by%20Pindai%20M%20Sithole.pdf.
Stellenbosch University, 2022, Monitoring and evaluation methods, Stellenbosch University, viewed 14 December 2022, from http://www0.sun.ac.za/crest/wp-content/uploads/2018/05/postgrad_studies_brochure.pdf.
Stickels, J.W. & Dobbs, R.R., 2007, ‘Helping alleviate statistical anxiety with computer aided statistical classes’, Journal of Scholarship of Teaching and Learning 7(1), 1–15.
Teferra, D., 2016, ‘Conclusion: The era of mass early career academics and aging faculty – Africa’s paradox’, Studies in Higher Education 41(10), 1869–1881. https://doi.org/10.1080/03075079.2016.1221650
Teman, E.D., 2013, ‘A Rasch analysis of the Statistical Anxiety Rating Scale’, Journal of Applied Measurement 14(4), 414–434.
The Association of Commonwealth Universities, 2020, Climate Impacts Research Capacity and Leadership Enhancement (CIRCLE), The Association of Commonwealth Universities, viewed 06 October 2020, from https://www.acu.ac.uk/get-involved/circle/.
Tilak, J.B.G., 2022, ‘Universities in the knowledge society: The nexus of national systems of innovation and higher education edited by Timo Aarrevaara, Martin Finkelstein, Glen A. Jones and Jisun Jung, Cham, Springer, 2021, ix + 434 pp.’, The Developing Economies 60(2), 107–110. https://doi.org/10.1111/deve.12305
UNESCO, 2021, Statistics and resources | 2021 Science Report, viewed 27 September 2023, from https://www.unesco.org/reports/science/2021/en/statistics.
UNIMA ICT Centre, 2022, Announcements, University of Malawi, viewed 14 December 2022, from https://www.unima.ac.mw/announcements/short-courses-in-project-monitoring-evaluation-data-analysis-in-spss-stata-and-questionaire-design-using-cspro-02-09-2022.
University of Cape Town, 2020, Structured Training for African Researchers (STARS), Centre for Innovation in Learning and Teaching | University of Cape Town, viewed 06 October 2020, from http://www.cilt.uct.ac.za/cilt/stars-special-project.
University of Cape Town, 2022, Masters in programme evaluation, management studies – Section of organisational psychology, viewed 14 December 2022, from http://www.organisationalpsychology.uct.ac.za/orgpsy/Masters-in-Programme-Evaluation.
University of Zambia Centre for Information and Communication Technologies (CICT), 2018, Monitoring and Evaluation Centre of Excellence, University of Zambia, viewed 14 December 2022, from https://www.unza.zm/academics/short-courses/mecex.
Uwizeye, D., Karimi, F., Thiong’o, C., Syonguvi, J., Ochieng, V., Kiroro, F. et al., 2022, ‘Factors associated with research productivity in higher education institutions in Africa: A systematic review’, AAS Open Research 4, 26. https://doi.org/10.12688/aasopenres.13211.2