About the Author(s)


Petronella Jonck
Faculty of Economic and Management Sciences, North-West University, Mafikeng, South Africa

Riaan de Coning
Faculty of Economic and Management Sciences, University of Stellenbosch Business School, Cape Town, South Africa

Citation


Jonck, P. & De Coning, R., 2020, ‘A quasi-experimental evaluation of a skills capacity workshop in the South African public service’, African Evaluation Journal 8(1), a421. https://doi.org/10.4102/aej.v8i1.421

Original Research

A quasi-experimental evaluation of a skills capacity workshop in the South African public service

Petronella Jonck, Riaan de Coning

Received: 09 Aug. 2019; Accepted: 06 Dec. 2019; Published: 31 Mar. 2020

Copyright: © 2020. The Author(s). Licensee: AOSIS.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Background: Few evaluation studies could be identified that investigated the impact of training. This lacuna should be viewed in light of austerity measures as well as the inability to measure return on investment on training expenditure, which is substantial year on year, especially in the context of the public service.

Objectives: This article reports on an impact evaluation of a research methodology skills capacity workshop.

Method: A quasi-experimental evaluation design was adopted in which comparison groups were utilised to evaluate the impact of a research methodology skills development intervention. A paired-sample t-test was used to measure the knowledge increase, whilst the influence of the comparison groups was controlled for by means of an analysis of variance. A hierarchical multiple regression analysis was performed to determine how much of the variance in research methodology knowledge could be attributed to the intervention whilst controlling for facilitator effect.

Results: Results indicated that the intervention had a statistically significant impact on research methodology knowledge. Furthermore, the intervention group differed statistically significantly from the control and comparison groups with respect to research methodology knowledge. Facilitator effect was found to be a moderating variable. A hierarchical regression analysis performed to isolate the impact of the intervention from the facilitator effect revealed a statistically significant result.

Conclusion: The study augments the corpus of knowledge by providing evidence of training impact within the South African public service, especially utilising a quasi-experimental pre-test–post-test research design and isolating the impact of facilitator effect from the intervention itself.

Keywords: skills development; evaluation; training impact; education; South African public service.

Introduction

National Treasury (2018) reported that skills development expenditure by national government amounted to R2.795 billion in the 2017–2018 financial year, notwithstanding skills grants paid by Sector Education and Training Authorities (SETAs). Johanson and Adams (2004) asserted that in order to meet the skill requirements of economies and individuals, skills development systems must, inter alia, offer meaningful and quality skills development whilst simultaneously avoiding high costs and inefficient implementation. Research conducted by McLaverty (2007), underscoring the importance of research skills for the contemporary public service, concluded that regardless of the extent of public administrators' involvement in independent research, the demands of the evolving public sector require public servants to possess some form of research skills, at least at an interpretive level. The broader argument for fostering research skills is premised on the National Skills Development Strategy (NSDS III; Department of Higher Education and Training 2011), which declares that benefits derived from the knowledge economy are determined by the capacity to conduct innovative research and apply new knowledge in the workplace.

Administrations in developing countries, such as Brazil, are often characterised as large and ineffective (Jaimovich & Rud 2014). Countries such as China and Chile have had some success in embarking on large-scale public sector reforms underpinned by, amongst others, New Public Management theories; New Public Management aims to respond to the shortcomings of traditional government systems by employing private-sector principles such as Total Quality Management. Dass and Abbott (2008) noted that ongoing training and development of public sector staff is a principal tenet of New Public Management. In spite of the crucial role of training and development, a 2008 review of the South African SETAs revealed that the skills development system suffers from, amongst others, inadequate monitoring and evaluation (Marock et al. 2008), which is suggestive of shortcomings in the skills development system and thus raises concern about the quality of skills development. The research reported in this article furthermore addresses a gap identified by the Parliamentary Monitoring Group on Public Service and Administration, Performance Management and Evaluation (2017), which questioned the National School of Government regarding the effect of the training courses being provided. In response, the accounting officer of the National School of Government indicated that an impact assessment study was necessary. The specific applicability of the quasi-experimental approach to impact evaluation studies is described by Pillay, Juan and Twalo (2012), who asserted that such investigations provide the best means to objectively determine the effect of skills development interventions.

Aim and objectives

The aim of the research was to evaluate the impact of a research methodology skills development intervention on the knowledge of participants. For the sake of clarity, an impact evaluation, in accordance with Rogers (2012) as cited in Jonck, De Coning and Radikonyana (2018:2), can be defined as any evaluation that systematically and empirically investigates the impact produced by an intervention. Moreover, Rogers (2014) emphasised that the purpose of such an evaluation is to provide empirical evidence about the change (if any) that can be attributed to the intervention, and that it can be undertaken on a capacity-building workshop. In support of the research aim, the study was conducted with the following objectives:

  • to determine whether the research methodology training intervention resulted in a statistically significant increase in research methodology knowledge
  • to determine whether previous training, topic engagement and facilitator effect had a statistically significant influence on research methodology knowledge, thereby influencing the impact of the skills development intervention.

Conceptual framework

The seminal training evaluation typology of Kirkpatrick and Kirkpatrick (2006) is frequently employed and illustrates the positioning of knowledge transfer in the broader skills development evaluation process. In brief, the first level, reaction, measures the satisfaction of training participants. Secondly, learning focuses on the increase in trainees' knowledge and/or skills, amongst others by conducting a knowledge test. Thirdly, behavioural changes after the training intervention are measured. Lastly, return on investment is determined by underscoring the effect of training at an organisational level (Kirkpatrick & Kirkpatrick 2006). Jasson and Govender (2017) affirmed that determining the knowledge outcome (i.e. referred to as learned behaviour) is a necessary precursor influencing the subsequent applied behavioural change, return on investment and ultimately risk management.

Against the stated background, the effectiveness of skills development interventions is premised on a plethora of facets, with knowledge increase representing a single part of the evaluation of skills development programmes (Jasson & Govender 2017). Specifically, training is an organisational response to an expressed need by employees for identified knowledge and capabilities to perform job functions more effectively. Pursuant to the training intervention, trainees are expected to utilise and/or implement the newly acquired knowledge and skills to increase job performance (viz. behavioural change), which should ultimately result in return on investment for the organisation (Mooney & Brinkerhoff 2008). This fundamental logic of training situates the knowledge outcome as one component of the broader training evaluation cycle. Testing acquired knowledge is a keystone: Mooney and Brinkerhoff (2008:97), for example, asserted that it supports trainees to consolidate new knowledge and gauge their own mastery whilst providing training facilitators and management with an opportunity to determine whether more, or less, instruction is required.

Empirical evidence suggests that transfer of learning is not ordinarily achieved; furthermore, there is a limited knowledge base emphasising input factors, such as the type of training programme and facilitator effect, and their impact on transfer of learning (Nikandrou, Brinia & Bereri 2009). The three extraneous variables examined by the research under study comprise facilitator effect, topic engagement and previous training. Despite adult learning being innately learner-directed, the role of a facilitator is a keystone of effective learning. Nikandrou et al. (2009) confirmed that the facilitator could have a statistically significant influence on the learning phase. Various factors related to the facilitator are deemed essential for effective training and have been scrutinised in research endeavours. Foley, Nesbit and Leach (as cited in Foley 2004) postulated that clarity of presentation, lesson structuring, verbal fluency and other qualities, such as enthusiasm, warmth and confidence, affect the acquisition of knowledge. Knowles, Holton and Swanson (2005) described the most important characteristics of a facilitator as possessing a sound conceptual and theoretical understanding of adult learning as well as the capacity to design and implement learning opportunities.

Previous training is also posited to influence training results. Jonck et al. (2018), reflecting on the findings of previous research, noted that training courses are assumed to cumulatively increase an individual's knowledge base. Research findings by Hailikari, Katajavuori and Lindblom-Ylanne (2008) indicated that students who retain relevant prior knowledge from previous training are likely to perform better in future related courses. It is therefore expected that both previous training and facilitator effect could have a statistically significant effect on knowledge increase. Topic engagement in the context of learning has been described as the directing of learning efforts towards psycho-motive, cognitive and affective activities (Kahn 1990), whilst Schaufeli and Bakker (2004) deemed engagement to be a positive, fulfilling, work-related state of mind characterised by vigour, dedication and absorption. Noe, Tews and Dachner (2010) found that topic engagement has a statistically significant influence on the outcome of a skills development intervention.

Research methods and design

The research reported in this article forms part of a longitudinal study examining the influence of a training intervention within the context of the public service. To this end, a quantitative approach was utilised, as described in the sections that follow.

Study design

This study adopted a quasi-experimental research design in which two comparison groups were utilised to evaluate the impact of a research methodology skills development intervention. More specifically, a pre-test–post-test design was implemented in the case of an intervention group to determine whether knowledge increase had occurred. Single-difference impact estimates were used with reference to the comparison groups consisting of a group receiving no intervention (viz. control group) and a group receiving an alternative intervention. For the sake of clarity, the comparison group could consist of units receiving either no treatment or an alternative treatment (Edmonds & Kennedy 2017). To clearly distinguish the comparison group receiving no treatment from the one receiving alternative treatment, the term ‘control group’ was used in the current research. White and Sabarwal (2014:9) specified that single-difference impact estimates compare outcome from the intervention group with the same in the comparison groups at a single point in time following the intervention. Quasi-experimental designs have been used extensively to determine the effectiveness of training interventions (see, e.g., Brutus & Donia 2010; Fjuk & Kvale 2018; Shannonhouse et al. 2017). Elaborating on the stated designs, the fundamental assumption of an impact assessment is that an intervention has defined outcomes (Jonck et al. 2018). White and Sabarwal (2014) noted that quasi-experimental research designs test causality in which the workshop is viewed as an ‘intervention’ evaluated to ascertain the efficacy thereof, measured by a predetermined measuring instrument.
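
To make the estimation strategy concrete, the sketch below illustrates how a single-difference impact estimate compares post-intervention outcomes across groups at a single point in time, alongside the pre-test–post-test change computed for the intervention group only. It is a minimal illustration on hypothetical score vectors (lower scores reflect greater knowledge, mirroring the instrument's coding); the values and variable names are assumptions, not study data.

```python
import numpy as np

# Hypothetical pre- and post-test knowledge scores; lower scores reflect
# greater knowledge, mirroring the instrument's positive-to-negative coding.
# These values are illustrative, not study data.
intervention_pre = np.array([55.0, 58.0, 54.0, 53.0])
intervention_post = np.array([48.0, 52.0, 50.0, 47.0])
control_post = np.array([58.0, 59.0, 57.0, 60.0])      # no intervention
comparison_post = np.array([60.0, 61.0, 59.0, 62.0])   # alternative intervention

# Pre-test–post-test change within the intervention group.
within_group_change = intervention_post.mean() - intervention_pre.mean()

# Single-difference impact estimates: outcomes compared across groups at a
# single point in time after the intervention (White & Sabarwal 2014).
single_diff_vs_control = intervention_post.mean() - control_post.mean()
single_diff_vs_comparison = intervention_post.mean() - comparison_post.mean()

print(within_group_change, single_diff_vs_control, single_diff_vs_comparison)
```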

Research setting

This research was conducted at public service training institutions and/or facilities both nationally and provincially. More specifically, the treatment intervention occurred at a national public service training institution, whilst the training intervention received by the comparison group took place at a provincial training facility. The control group was not exposed to a training intervention; nonetheless, the measuring instrument was administered at a provincial facility. The research process with reference to the intervention and comparison groups was similar in that skills development facilitators requested training to be conducted. Pursuant to the identification of the training need, the skills development facilitators requested the training intervention in accordance with the identified need. Consultations revealed that the request was premised on participants' lack of research capacity to complete their higher education postgraduate studies, which negatively influenced compliance with bursary requirements and resulted in fruitless expenditure, audit findings and disciplinary action (Dlomo 2017). Thus, the participants volunteered to undergo training (Jonck et al. 2018). The control group completed the measuring instrument without uptake of a training intervention. Lastly, the comparison group completed a research methodology component as a prerequisite for an accredited course. Participants in the comparison group hypothetically had greater motivation to acquire research methodology knowledge, as this could result in certification relating to an accredited course. It should be noted that the research methodology component of the alternative treatment and the research methodology workshop had the same facilitators, albeit different course content, to minimise the hypothesised facilitator effect.

Study population and sampling strategy

The study population for the research under discussion comprised working-age employees currently employed permanently or on a fixed-term contract in public service either nationally or provincially in need of or envisioned to be in need of research methodology knowledge. The unit of analysis was on the micro level, thus individual participants exposed to or anticipated to be exposed to a training intervention, namely, treatment or alternative training intervention. The study utilised a non-random sampling technique, namely, convenience sampling (i.e. participants who volunteered to attend training and/or identified the need for training). An acknowledged caveat of the sampling procedure relates to the inability to generalise the findings to the population, as non-probability sampling adversely influences the external validity of results. Secondly, the total sample consisted of 70 respondents; thus, the findings are based on a relatively small sample, which cannot be perceived as representative of the study population. Nevertheless, as mentioned in Jonck et al. (2018:6), the aim in reporting the results was not to generalise the findings to the larger population, which would necessitate a more adequate sample size, but to report on findings within the scope of the sample.

The sample size for the intervention group was 33, for the control group it was 22 and for the comparison group it was 15. The response rate for the intervention group was 96.9% and that for the comparison groups was 60%.

Research participants

The final sample consisted of 70 public service employees. Of the participants, 55.1% (n = 38) indicated their gender as female, and the remaining 44.9% (n = 31) were male. Three government departments took part in the study: two from the national sphere of government (66.7%) and one provincial department (33.3%). Accordingly, 78.6% (n = 55) of the sample was employed in a national department, whilst 21.4% (n = 15) was employed at provincial level. The age distribution represented a continuum ranging between 25 and 65 years, with a well-represented distribution. Specifically, 8.6% (n = 6) of the respondents were aged 25 years or younger, 21.4% (n = 15) were aged between 26 and 35 years, 32.9% (n = 23) were between 36 and 45 years, 30% (n = 21) were between 46 and 55 years, whilst 7.1% (n = 5) were aged between 56 and 65 years. Concerning the highest academic qualification, the largest proportion (n = 24; 34.3%) of participants had an honours degree, followed by a bachelor's degree (n = 15; 21.4%), a master's qualification (n = 12; 17.1%), a diploma (n = 9; 12.9%), a postgraduate qualification (n = 7; 10%) and lastly a grade 12 qualification with certificates (n = 3; 4.3%). The educational distribution provided support for the contention that workshop attendance was premised on respondents' ability to successfully complete postgraduate studies. Pertaining to previous training, 67.1% (n = 47) of the sample noted that they had previous research methodology training (i.e. analytical methods and ethics in research), whilst 32.9% (n = 23) had no previous training.

In agreement with a quasi-experimental research design, which, by definition, excludes random assignment (White & Sabarwal 2014:1), the sample was more or less equally divided into intervention group (47.1%; n = 32) and comparison group (52.9%; n = 37) respondents. The comparison group could be further divided into a control group receiving no intervention (59.5%; n = 22) and comparison group participants who underwent a different intervention on a similar topic (40.5%; n = 15), namely the research methodology unit of an accredited course. Topic engagement was determined by considering the respondents' intention to pursue research-related future training. Of the 70 participants, 25.7% (n = 18) replied positively, 34.3% (n = 24) replied negatively, whilst 40% (n = 28) indicated that they would be interested in other training interventions not related to research methodology. For the sake of interest, the respondents were requested to indicate the environmental support they would require with specific reference to research methodology knowledge. The sample consisted of 27.1% (n = 19) respondents requiring training, 2.9% (n = 2) in need of coaching, 11.4% (n = 8) requesting mentoring, 7.1% (n = 5) needing e-learning support, 38.6% (n = 27) calling for all of the above and 12.9% (n = 9) indicating that they required none of the above.

Intervention

The treatment intervention comprised two units – a qualitative and a quantitative section – implemented over a 2-day period. The workshop had two main learning goals: (1) to provide a preliminary theoretical and conceptual underpinning of research methodology as a subject, differentiating between the various paradigms as well as approaches, and (2) to provide participants with a brief introduction to the two main methodologies, namely qualitative and quantitative research methods. The first learning goal concentrated on the fundamentals of research, distinguishing between information-seeking and empirical research. In order to address this goal, that is, to provide a thorough theoretical and conceptual underpinning, three research methodologies, namely quantitative, qualitative and mixed methods, were underscored before attention was focused exclusively on qualitative research (Jonck et al. 2018), addressing the second learning goal. The second day commenced with a summary of the previous day's learning, with specific reference to what constitutes research and the various methodologies. More in-depth attention was, however, focused on the underlying paradigms that underpin the various methodologies (i.e. positivistic and post-positivistic paradigms) as well as the research process, commencing with the identification of a research problem. The course outline in both sections encompassed the underlying assumption(s) of the methodology, data collection, development and implementation of data-collecting instruments, sampling strategies, coding and capturing of data and, finally, data analysis. Teaching aids utilised included PowerPoint presentations, flip charts and electronic devices. The mode of delivery encompassed traditional lecturing, class discussions and practical exercises on a resource compact disk (CD) distributed to participants along with the learning guide (Jonck et al. 2018).

The comparison group was exposed to a research methodology module, part and parcel of an accredited course with specified unit standards and credits. The specific outcomes of the unit standard comprised (1) demonstrating an understanding of research design and methodology in a specific context, (2) collecting appropriate data in accordance with a research plan and aligned to specified indicators, (3) analysing and interpreting the collated data, and (4) presenting the findings and recommendations. The workshop commenced with the application of quantitative research concepts, types of data, differences in the analysis of discrete and continuous data and basic statistical principles. The second day summarised the application of basic statistical analyses and introduced hypothesis testing, in addition to briefly mentioning the different methodologies. Training aids utilised during the implementation of the training intervention comprised PowerPoint presentations, whilst the facilitator guide specified that the training room should ideally be set up in a U-shape. The mode of delivery was specified to be a case study approach.

It should be noted that the control group was not exposed to an intervention.

Data collection

An unabridged, self-constructed instrument measuring research methodology knowledge and comprising three sections was utilised to gather primary data. Section A inquired about the respondents' biographical information, including gender, age and qualification, to mention but a few. Section B, comprising 28 items, required the respondents to reply on a 4-point Likert scale, where 1 represents 'strongly agree' and 4 represents 'strongly disagree', to statements assessing research methodology knowledge. Section C provided the respondents with an opportunity to identify future research-related training needs, which was utilised to determine respondents' topic engagement (i.e. respondents' aspiration to engage with research methodology as a construct at a later stage), in addition to research-related support required from different stakeholders. Response categories encompassed e-learning support, coaching, mentoring and supplementary training. Respondents could thus list the environmental support that they would require to increase transfer of learning. Both positively and negatively worded items were included in the construction of the measuring instrument to decrease the probability of acquiescence. Colosi (2005) stated that reverse-scored items probe for acquiescence, which could be defined as a tendency to agree with a statement without considering the content of the item. Examples of negatively worded items included the following: 'respondents do not have to be informed about the aim of the data gathering process' and 'the rule of thumb with reference to sample size is the smaller the population, the less respondents you include in the sample'. Examples of positively worded items are as follows: 'the research paradigm provides an indication of the research design that will be implemented in the study' and 'Cronbach's alpha coefficient refers to the inter-item correlation of the measuring instrument'.
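
To illustrate how negatively worded items are typically handled before a total knowledge score is computed, the short sketch below reverse-scores hypothetical items on the 4-point scale described above. The item names, the data and the recoding rule (new score = 5 − old score) are illustrative assumptions, not details taken from the instrument itself.

```python
import pandas as pd

# Hypothetical responses to four items on the 4-point scale; not study data.
responses = pd.DataFrame({
    "item_01": [1, 2, 4, 3],   # positively worded
    "item_02": [4, 3, 1, 2],   # negatively worded
    "item_03": [2, 2, 3, 1],   # positively worded
    "item_04": [3, 4, 2, 1],   # negatively worded
})
negatively_worded = ["item_02", "item_04"]

# Reverse-score negatively worded items on a 1-4 scale: new = 5 - old.
scored = responses.copy()
scored[negatively_worded] = 5 - scored[negatively_worded]

# Total score per respondent after recoding.
scored["total"] = scored.sum(axis=1)
print(scored)
```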

In terms of the psychometric properties of the measuring instrument, a principal component analysis with oblique rotation reduced the 28 items to two factors, verified by confirmatory factor analysis. Specifically, a Monte Carlo parallel analysis found that two components had eigenvalues exceeding the corresponding criterion values for a randomly generated data matrix of a similar size (Jonck et al. 2018:6). Hence, the construct validity of the measuring instrument was supported. Cronbach's alpha coefficient, which reflects inter-item covariance, was calculated to determine the instrument's reliability, that is, its ability to consistently yield the same results over various iterations given similar sample characteristics (Cohen, Manion & Morrison 2007). In the current study, the Cronbach's alpha for the 28 items was 0.78, marginally lower than the 0.88 obtained by Jonck et al. (2018). Delineating further, the internal consistency was 0.79 for the pre-test group, increased to 0.83 for the post-test, and the comparison group reported a reliability coefficient of 0.72. Naidoo, Abarantyne and Rugimbana (2019) highlighted that alpha coefficients of 0.70 and higher are deemed acceptable in social science research. In light of the above-mentioned Cronbach's alpha values, the measuring instrument is deemed reliable for both the intervention and comparison groups.
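
Because Cronbach's alpha carries the reliability argument, a minimal sketch of the standard formula, alpha = (k / (k − 1)) × (1 − Σ item variances / variance of the total score), is given below. The data are hypothetical and the function is an illustrative assumption, not the authors' code.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses (rows = respondents, columns = items); not study data.
demo = np.array([
    [1, 2, 1, 2],
    [2, 2, 3, 3],
    [3, 4, 3, 4],
    [4, 3, 4, 4],
    [2, 1, 2, 1],
])
print(round(cronbach_alpha(demo), 2))
```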

In accordance with the quasi-experimental research design, data collection with specific reference to the intervention group took place prior to and after the 2-day training intervention. Additionally, the research study encompassing the data collection process was utilised as an example in the quantitative unit of the training intervention (Jonck et al. 2018). As single-difference impact estimates were used with reference to the comparison group, the respondents were requested to complete the measuring instrument after the alternative training intervention. As for the control group, in the absence of a training intervention, the measuring instrument was administered once, without repeat measures (Sachane, Bezuidenhout & Botha 2018).

The structured, self-developed questionnaire elaborated on above was administered in English to the intervention, control and comparison groups by facilitators in person. Moreover, the interventions, both treatment and alternative, were carried out by the same facilitator in English. Even though English is the official language used in the workplace and the majority of respondents had national senior certificates (Weihs & Meyer-Weitz 2016), which is an employment requirement, it is acknowledged that language barriers might have had an influence on the reported research.

Data analysis

Completed questionnaires were coded, captured, verified and cleaned in Excel, after which the Excel spreadsheet was exported to the Statistical Package for the Social Sciences (SPSS) version 20.0, utilised for statistical analysis. Descriptive analyses included frequencies and percentage values, which were utilised to provide a biographical profile of the sample. Descriptive statistics for research methodology knowledge (i.e. as measured in Section B) comprised the mean score, standard deviation (SD) and standard error of the mean. Reliability tests included Cronbach's alpha, and factor analysis was computed to discover patterns amongst variations in values, that is, to confirm the validity of the instrument (Bouwan & Ling 2006; Ho 2006). A paired-sample t-test was performed to verify that an increase in research methodology knowledge ensued after the intervention. Thereafter, an analysis of variance (ANOVA) was used to establish whether differences existed amongst the intervention, control and comparison groups relating to mean scores of research methodology knowledge. The post-hoc Tukey's Honest Significant Difference (HSD) test was used to identify statistically significant differences in mean scores between and within groups (Pallant 2011). Independent sample t-tests were performed to control for research-related previous training (viz. ethics in research) and facilitator effect, whilst a one-way ANOVA was performed to determine whether either qualification or topic engagement statistically significantly influenced research methodology knowledge. Based on the independent sample t-test outcome with reference to facilitator effect, a hierarchical multiple regression analysis was performed to determine the influence of the intervention after entering facilitator effect into the equation. Thus, hierarchical multiple regression analysis was used to determine the amount of variance in research methodology knowledge explained by the intervention itself after entering the facilitator effect into the first model (Geldenhuys & Henn 2017).
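
For readers who want to reproduce the analysis sequence outside SPSS, the sketch below mirrors the main steps (paired-sample t-test, one-way ANOVA with Tukey's HSD post-hoc comparisons, and a two-step hierarchical regression) using Python's scipy and statsmodels. The data are randomly generated and the variable names are assumptions for illustration; this is not the study dataset or the authors' code.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)

# Hypothetical scores; lower values reflect greater knowledge.
pre = rng.normal(58, 8, 33)
post = pre - rng.normal(6, 3, 33)

# 1. Paired-sample t-test: did scores change after the intervention?
t_stat, p_paired = stats.ttest_rel(pre, post)

# 2. One-way ANOVA across intervention, control and comparison groups,
#    followed by Tukey's HSD post-hoc comparisons.
df = pd.DataFrame({
    "score": np.concatenate([post, rng.normal(58, 7, 22), rng.normal(60, 6, 15)]),
    "group": ["intervention"] * 33 + ["control"] * 22 + ["comparison"] * 15,
})
f_stat, p_anova = stats.f_oneway(*[g["score"] for _, g in df.groupby("group")])
tukey = pairwise_tukeyhsd(df["score"], df["group"], alpha=0.05)

# 3. Hierarchical regression: facilitator effect entered first, then the
#    intervention; the R-squared change isolates the intervention's contribution.
df["facilitated"] = (df["group"] != "control").astype(int)
df["intervention"] = (df["group"] == "intervention").astype(int)
model1 = smf.ols("score ~ facilitated", data=df).fit()
model2 = smf.ols("score ~ facilitated + intervention", data=df).fit()
r2_change = model2.rsquared - model1.rsquared
step_test = anova_lm(model1, model2)  # F-test for the added predictor

print(p_paired, p_anova, r2_change)
print(tukey.summary())
```

The R-squared change between the two regression models corresponds to the additional variance attributed to the intervention after facilitator effect has been entered in the first step.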

Ethical considerations

Standard ethical protocol was observed (Wagner, Kawulich & Garner 2012) in this study. Each questionnaire was accompanied by an instructional page clarifying the purpose of the study, promising confidentiality, indicating the importance of participation and potential benefits. Respondents were informed that participation was voluntary and that it could be suspended at any stage without reprisal. Care was taken to ensure anonymity and to prevent any detrimental effects on participants (Marembo & Chinyamurindi 2018). The questionnaire was distributed physically by facilitators to participating groups (i.e. intervention, control and comparison groups).

Limitations

The caveats acknowledged in this study include the utilisation of single-difference impact estimates to compare the intervention and comparison groups in terms of research methodology knowledge, whereas a difference-in-difference method might have been more beneficial to control for selection bias (White & Sabarwal 2014). It is recommended that a difference-in-difference method be utilised in future research endeavours. Furthermore, the pre-test–post-test design is subject to a testing threat, which should be taken into consideration when interpreting the results. However, Marsden and Torgerson (2012:585) suggested mitigating the testing threat with a Solomon group design (viz. utilising various groups comprising pre-test–post-test with treatment intervention, post-test only with no intervention and post-test only with alternative intervention), which was implemented in the study under discussion. Another limitation is that the results are based on a small convenience sample representing the national and provincial spheres of government and excluding local government. Hence, the findings cannot be generalised to the public service as a population. Despite the aforementioned limitations, the aim of the study did not require a representative sample, as it aimed to report findings within the scope of the selected sample and training intervention.
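
As the limitation above points towards a difference-in-difference method for future work, the sketch below contrasts, on hypothetical group means, the single-difference estimate used here with the difference-in-difference estimator, which would additionally require pre-test measures from the comparison group. The numbers are illustrative assumptions, not study results.

```python
# Hypothetical group means; not study data.
intervention_pre, intervention_post = 58.0, 51.0
comparison_pre, comparison_post = 59.0, 58.0

# Single-difference estimate: post-test comparison only (as used in this study).
single_difference = intervention_post - comparison_post

# Difference-in-difference estimate: the change in the intervention group net of
# the change in the comparison group, which helps control for selection bias and
# shared time trends (White & Sabarwal 2014).
did = (intervention_post - intervention_pre) - (comparison_post - comparison_pre)

print(single_difference, did)
```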

Results

Descriptive statistics comprising the mean score, SD and standard error of the mean for research methodology knowledge are reported in Table 1. The importance of the above-mentioned statistics is to establish the status quo of research methodology knowledge in a sample of public servants with reference to the three groups, that is, the intervention, control and comparison groups.

TABLE 1: Descriptive statistics for research methodology knowledge.

Table 1 shows that the mean score for the intervention group (mean = 51.061; SD = 14.387) was lower than that of both the control group (mean = 58.364; SD = 7.638) and the comparison group (mean = 60.333; SD = 5.899); because the measuring instrument was coded from positive to negative, a lower score reflects greater research methodology knowledge. Notably, single-difference impact estimates were used to calculate the aforementioned values.

A paired-sample t-test was performed to determine whether the intervention had a statistically significant effect on research methodology knowledge in the intervention group (see Table 2) whilst controlling for the influence of both control and comparison groups by means of ANOVA. Thus, ANOVA was used to establish whether differences existed amongst the intervention, control and comparison groups relating to mean scores for research methodology knowledge (see Table 3).

TABLE 2: Paired-sample t-test for research methodology in the intervention group.
TABLE 3: Analysis of variance comparison between and within intervention, control and comparison groups.

The results presented in Table 2 indicate a statistically significant decrease, at the 99% confidence level, in respondents' scores from the first iteration (mean = 52.323; SD = 12.638; t = 41.813; p ≤ 0.000) to the second iteration (mean = 52.313; SD = 12.825; t = 41.198; p ≤ 0.000); because the measuring instrument was coded from positive to negative, this decrease reflects an increase in knowledge. The mean decrease in scores was 3.11, with 95% confidence intervals (CIs) for the mean of 49.841 to 54.805 in the first iteration and 49.794 to 54.832 in the second iteration. The η² statistic indicated a large effect (0.96).
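
For reference, the eta-squared effect size of a paired-sample t-test is often computed as η² = t²/(t² + n − 1) (e.g. Pallant 2011). The snippet below applies this formula to the reported t-value under the assumption that n = 33 (the intervention group size); it is an approximation for illustration, not necessarily the authors' exact calculation.

```python
# Approximate eta-squared from the reported t-value; n = 33 is an assumption.
t = 41.813
n = 33
eta_squared = t**2 / (t**2 + (n - 1))
print(round(eta_squared, 2))  # a large effect by the conventional cut-off (> 0.14)
```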

To control for the influence of control and comparison groups, an ANOVA was performed (as shown in Table 3).

As per Table 3, statistically significant differences at the 99% confidence level were found between and within the groups. Post-hoc comparisons using Tukey's HSD test indicated that the mean score for the intervention group was statistically significantly different, at the 95% confidence level, from both the control (mean difference = -7.302; p ≤ 0.047) and comparison (mean difference = -9.271; p ≤ 0.026) groups. The mean score of the control group statistically significantly varied only from the intervention group (mean difference = -7.302; p ≤ 0.047) but not from the comparison group (mean difference = -1.969; p ≤ 0.882). In the same vein, the comparison group statistically significantly differed from the intervention group (mean difference = -9.271; p ≤ 0.026) but not from the control group (mean difference = -1.969; p ≤ 0.882).

To determine whether previous training and facilitator effect had a statistically significant influence on research methodology knowledge, independent sample t-tests were conducted, as depicted in Tables 4 and 5.

TABLE 4: Independent t-test of previous training on research methodology knowledge.
TABLE 5: Independent t-test of facilitator effect on research methodology knowledge.

An independent sample t-test was performed to determine the influence of research-related previous training, such as ethics in research (see Table 4). Because the Levene's test did not yield a statistically significant result (f = 0.055; p = 0.815), equal variances were assumed. According to Table 4, previous training did not statistically significantly influence research methodology knowledge.

Table 5 reports the results of the independent sample t-test investigating the influence of facilitator effect on research methodology knowledge. As the Levene's test returned a statistically significant result (f = 15.875; p = 0.000), equal variances were not assumed. A statistically significant difference in research methodology knowledge was found based on facilitator effect. As such, respondents who received facilitation (viz. intervention and comparison group respondents) had a higher mean score than those who did not (viz. control group respondents, who were not exposed to an intervention and thus received no facilitation).

Subsequently, a one-way ANOVA was used to determine whether qualification and topic engagement had a statistically significant influence on respondents' research methodology knowledge (as shown in Table 6). The results indicated that neither qualification nor topic engagement statistically significantly influenced research methodology knowledge.

Lastly, a hierarchical regression analysis was performed to determine the percentage of variance explained by the intervention whilst controlling for facilitator effect. Table 7 shows the results of regression analysis.

TABLE 6: Analysis of variance comparison between and within groups for selected variables.
TABLE 7: Variances explained by intervention whilst controlling for facilitator effect.

Table 7 shows that facilitator effect explained 9.4% of the variance in research methodology knowledge. Furthermore, the treatment itself explained an additional 5.3% over and above the facilitator effect. Both variables had a statistically significant influence on research methodology knowledge: facilitator effect was significant at the 99% confidence level, whilst the treatment was significant at the 95% confidence level.

Discussion

The objective of this study was to determine whether a research methodology training intervention resulted in a statistically significant increase in research methodology knowledge whilst controlling for research-related previous training, topic engagement and facilitator effect, which hypothetically might have a statistically significant influence on the reported training outcome.

Both the training intervention and facilitator effect had a statistically significant influence on the increase in reported research methodology knowledge. Statistically significant differences in mean scores were found between the intervention group and both the control and comparison groups, confirming that the knowledge increase resulted from the treatment intervention. The fact that the mean score of the control group statistically significantly varied from the intervention group but not from the comparison group further supports the notion that the knowledge increase was the result of the training treatment for the intervention group. This is further confirmed by the fact that the comparison group statistically significantly differed from the intervention group but not from the control group. As noted, a difference-in-difference design might have been beneficial (see the 'Limitations' section); however, the respondents were permanently employed public servants, currently engaged in postgraduate studies, and thus assumed to be comparable. Furthermore, the treatment itself explained an additional 5.3% of variance above that of the facilitator effect (reported below). This finding is to be expected and supports the supposition that research endeavours should investigate the influence of specific input factors, such as facilitator effect and the type of learning programme, on learning transfer (Nikandrou et al. 2009). The findings of the study furthermore confirm the results of Jonck et al. (2018), who reported a statistically significant increase in research methodology knowledge because of a skills development intervention.

Facilitator effect caused a statistically significant difference in research methodology knowledge, as evidenced by the fact that respondents who received a facilitated skills development intervention had a higher mean score than those who did not receive any facilitation (i.e. control group respondents were not exposed to treatment). Notably, the facilitators were constant throughout the study; thus, the same two facilitators facilitated both the treatment and alternative skills development workshops. The research further revealed that 9.4% of the variance in reported research methodology knowledge could be ascribed to facilitator effect. This finding confirms various research findings indicating that the qualities and characteristics of facilitators might influence adult learning (Nesbit, Leach & Foley as cited in Foley 2004; Knowles, Holton & Swanson 2005; Miller 1987).

The analysis of variance revealed that neither qualification nor topic engagement statistically significantly influenced research methodology knowledge. These findings contradict those of Noe et al. (2010), who found that topic engagement has a statistically significant influence on the outcome of a skills development intervention. The finding related to qualification also contradicts that of Hailikari et al. (2008), who reported that students who retain relevant prior knowledge from previous training were likely to perform better in future related courses.

The significance of the research reported in this study centres on the empirical results pertaining to the impact of the skills development intervention, as well as the input factors shown to influence the outcome of training interventions, which could enhance the effective utilisation of limited training resources. It further encourages the usage of knowledge tests as an assessment methodology to validate a hypothesised knowledge increase resulting from a training intervention.

Conclusion

This research study was prompted by the paucity of evaluation studies that accentuate the impact of training. The article reported on a quasi-experimental evaluation focusing on a research methodology skills capacity workshop in the public service. A quasi-experimental pre-test–post-test research design was adopted in which comparison groups were employed to evaluate the impact of a research methodology skills development intervention. The study results indicated that the training intervention had a statistically significant impact on research methodology knowledge. Furthermore, single-difference impact estimates indicated that the intervention group statistically significantly differed from the control and comparison groups. Facilitator effect was found to be an extraneous variable (viz. a variable that hypothetically influences the dependent variable but is not the independent variable under investigation). The study augments the corpus of knowledge by providing evidence of training impact within the South African public service with the specific utilisation of a quasi-experimental research design.

Acknowledgements

The authors would like to recognise the participants who took part in the training intervention.

Competing interests

The authors have declared that no competing interests exist.

Authors’ contributions

P.J. conceptualised the article and was responsible for developing the measuring instrument, data analysis and writing the article, thus contributing 70% towards the manuscript. R.d.C. assisted with data collection as well as compiling the article, thus contributing 30% towards the manuscript. Both authors were involved in the facilitation of the workshop.

Funding information

Funding was received in part from the Global Innovative Forefront Talent (GIFT) Research Entity and is hereby acknowledged.

Data availability statement

Data sharing is not applicable to this article as no new data were created or analysed in this study.

Disclaimer

The views and opinions expressed in this article are those of the authors and do not necessarily reflect the official policy or position of any affiliated agency of the authors.

References

Bouwan, G.D. & Ling, R., 2006, The research process, 5th edn., Oxford University Press, Singapore.

Brutus, S. & Donia, S., 2010, ‘Improving the effectiveness of students in groups with a centralized peer evaluation system’, Academy of Management Learning and Education 9(4), 652–662. https://doi.org/10.5465/amle.9.4.zqr652

Cohen, L., Manion, L. & Morrison, K., 2007, Research methods in education, 5th edn., Routledge, London.

Colosi, R., 2005, ‘Negatively worded questions cause respondent confusion’, Paper presented in ASA Section on Survey Research Methods, Annual Meeting, American Statistical Association, August 7–11, 2005, pp. 2896–2903, Minneapolis, MN.

Dass, M. & Abbott, M., 2008, ‘Modelling new public management in an Asian context: Public sector reform in Malaysia’, The Asia Pacific Journal of Public Administration 30(1), 59–82. https://doi.org/10.1080/23276665.2008.10779343

Department of Higher Education and Training, 2011, National skills development strategy III, Department of Higher Education and Training, Cape Town.

Dlomo, P.A., 2017, ‘The impact of irregular expenditure in the South African public finance with specific reference to the National Department of Public Works’, Unpublished Master’s dissertation, Cape Peninsula University, Cape Town.

Edmonds, W.A. & Kennedy, T.D., 2017, An applied guide to research designs: Quantitative, qualitative, and mixed methods, Sage, London.

Fjuk, A. & Kvale, K., 2018, ‘Developing managerial dynamic capabilities: A quasi-experimental field study of the effects of design thinking training’, Academy of Management Learning and Education 17(2), 184–202. https://doi.org/10.5465/amle.2016.0187

Foley, G., 2004, Dimensions of adult learning, Open University Press, Berkshire.

Geldenhuys, M. & Henn, C.M., 2017, ‘The relationship between demographic variables and well-being of women in South African workplaces’, South African Journal of Human Resource Management 15(0), a683, viewed 14 May 2019, from https://doi.org/10.4102/sajhrm.v15i0.683

Hailikari, T., Katajavuori, N. & Lindblom-Ylanne, S., 2008, ‘The relevance of prior knowledge in learning and instructional design’, American Journal of Pharmaceutical Education 72(5), 113, viewed 15 May 2019, from https://www.ajpe.org/doi/full/10.5688/aj7205113

Ho, R., 2006, Handbook of univariate and multivariate data analysis and interpretation with SPSS, Chapman and Hall, New York.

Jaimovich, E. & Rud, J.P., 2014, ‘Excessive public employment and rent-seeking traps’, Journal of Development Economics 106, 144–155. https://doi.org/10.1016/j.jdeveco.2013.09.007

Jasson, C.C. & Govender, C.M., 2017, ‘Measuring return on investment and risk in training: A business training evaluation model for managers and leaders’, Acta Commercii – Independent Research Journal in Management Sciences 17(1), a401. https://doi.org/10.4102/ac.v17i1

Johanson, R. & Adams, A. (eds.), 2004, Skills development in sub-Saharan Africa, World Bank, Washington, DC.

Jonck, P., De Coning, R. & Radikonyana, P., 2018, ‘A micro-level outcomes evaluation of a skills capacity intervention within the South African public service: Towards an impact evaluation’, South African Journal of Human Resource Management 16(0), 1–9. https://doi.org/10.4102/sajhrm.v16i0.1000

Kahn, W.A., 1990, ‘Psychological conditions of personal engagement and disengagement at work’, Academy of Management Journal 33(4), 692–724. https://doi.org/10.5465/256287

Kirkpatrick, D. & Kirkpatrick, J., 2006, Evaluating training programs, 5th edn., Berrett-Koehler, San Francisco, CA.

Knowles, M.S., Holton, E.F. & Swanson, R.A., 2005, The adult learner: The definitive classic in adult education and human resource development, Taylor & Francis, Boston, MA.

Marembo, M. & Chinyamurindi, W.T., 2018, ‘Impact of demographic variables on emotional intelligence levels amongst a sample of early career academics at a South African higher education institution’, South African Journal of Human Resource Management 16(0), a1051, viewed 02 May 2019, from https://doi.org/10.4102/sajhrm.v16i0.105

Marock, C., Harrison-Train, C., Soobrayan, B. & Gunthorpe, J., 2008, SETA review, Development Policy Research Unit, University of Cape Town, Cape Town.

Marsden, E. & Torgerson, C.J., 2012, ‘Single group, pre- and post-test research designs: Some methodological concerns’, Oxford Review of Education 38(5), 583–616. https://doi.org/10.1080/03054985.2012.731208

McLaverty, L., 2007, ‘Public administration research in South Africa: An assessment of journal articles in Journal of Administration & Administratio Publica from 1994–2006’, Master’s thesis, Department of Public Administration, University of Cape Town, Cape Town.

Miller, P., 1987, ‘Ten characteristics of a good teacher’, Reflections 25(1), 36–38.

Mooney, T. & Brinkerhoff, R.O., 2008, Courageous training, Berrett-Koehler, San Francisco, CA.

Naidoo, V., Abarantyne, I. & Rugimbana, R., 2019, ‘The impact of psychological contracts on employee engagement at a university of technology’, South African Journal of Human Resource Management 17(0), a1039, viewed 25 April 2019, from https://sajhrm.co.za/index.php/sajhrm/article/view/1039

National Treasury, 2018, Estimates of national expenditure 2018 abridged version, National Treasury, Pretoria.

Nikandrou, I., Brinia, V. & Bereri, E., 2009, ‘Trainee perceptions of training transfer: An empirical analysis’, Journal of European Industrial Training 33(3), 255–270. https://doi.org/10.1108/03090590910950604

Noe, R.A., Tews, M.J. & Dachner, A.M., 2010, ‘Learner Engagement: A new perspective for enhancing our understanding of learner motivation and workplace learning’, The Academy of Management Annals 4(1), 279–315. https://doi.org/10.5465/19416520.2010.493286

Pallant, J., 2011, SPSS survival manual: A step by step guide to data analysis using SPSS, 4th edn., Allen and Unwin, Crows Nest.

Parliamentary Monitoring Group on Public Service and Administration, Performance Monitoring and Evaluation, 2017, Prohibition on public servants doing business with State: Financial disclosures; national school of government funding model, viewed 15 February 2019, from https://pmg.org.za/committee-meeting/24616/.

Pillay, P., Juan, A. & Twalo, T., 2012, Impact assessment of national skills development strategy II: Measuring impact assessment of skills development on service delivery in government departments, Human Sciences Research Council (HSRC), Pretoria.

Rogers, P.J., 2012, Introduction to impact evaluation, The Rockefeller Foundation, Washington, DC.

Rogers, P.J., 2014, Methodological brief, no. 1: Overview of impact evaluation, United Nations Children’s Fund (UNICEF), Florence.

Sachane, M., Bezuidenhout, A. & Botha, C., 2018, ‘Factors that influence employee perceptions about performance management at Statistics South Africa’, South African Journal of Human Resource Management 16(0), a986, viewed 15 March 2019, from https://doi.org/10.4102/sajhrm.v16i0.986

Schaufeli, W.B. & Bakker, A.B., 2004, ‘Job demands, job resources and their relationship with burnout and engagement: A multi-sample study’, Journal of Organisational Behaviour 25, 293–315. https://doi.org/10.1002/job.248

Shannonhouse, L., Lin, Y.D., Shaw, K. & Porter, M., 2017, ‘Suicide intervention training for K-12 schools: A quasi-experimental study on ASIST’, Journal of Counseling and Development 94, 3–13. https://doi.org/10.1002/jcad.12112

Wagner, C., Kawulich, B. & Garner, M., 2012, Doing social research: A global context, McGraw-Hill, Berkshire.

Weihs, M. & Meyer-Weitz, A., 2016, ‘Do employees participate in workplace HIV testing just to win a lottery prize?: A quantitative study’, South African Journal of Human Resource Management 14(1), a722, viewed 22 April 2019, from https://doi.org/10.4102/sajhrm.v14i1.722

White, H. & Sabarwal, S., 2014, Quasi-experimental design and methods. Methodological briefs, impact evaluation no. 8, United Nations Children’s Fund (UNICEF), Florence.


