

Language as a predictor of academic achievement: a validity argument for a low-stakes post-entry academic language proficiency screening test for first-year university students.

The transition from secondary to tertiary education is often conceived of as a difficult passage into a new culture with specific rules, conventions and language use (van Kalsbeek & Kuiken, 2014). In Flanders, where, except for Medicine and Dentistry, there are no entry requirements or selection mechanisms, the follow-up of incoming students is important. Success rates of first-year students are low: at KU Leuven in 2015-2016, for example, less than 40% of the students obtained all of their credits (KU Leuven, 2016). Deygers (2017) even states that in this system, the first year can be considered the de facto selection mechanism. The increasing diversity in student backgrounds poses new challenges in helping students 'acculturate' to academic education (Glorieux, Laurijssen & Sobczyk, 2014b; Read, 2015; Van Dyk, 2015; van Kalsbeek & Kuiken, 2014).

An intrinsic part of students’ background is their language ability. Not only international students struggle with the language requirements of university education; many domestic students experience difficulties as well. This relates to one of Hulstijn’s corollaries: individual differences in language task performance will be relatively large, even among L1 speakers, when lexically, syntactically and cognitively more complex language (Higher Language Cognition) is involved (Hulstijn, 2011; Hulstijn, 2015). International research clearly shows that low language proficiency can be an inhibitor of academic achievement (Davies, 2007; Elder, Bright & Bennet, 2007; Elder, Erlam & van Randow, 2002; Graham, 1997; Hill, Storch & Lynch, 1999; Read, 2015; Van Dyk, 2015). Although these results cannot be transferred to the educational context of Flanders without adjustments, the idea that language proficiency is a necessary condition for study success has recently gained ground in Flanders (Bonne & Vrijders, 2016; Peters & Van Houtven, 2010; De Wachter, Heeren, Marx & Huyghe, 2013; Herelixka & Verhulst, 2014; Raad voor de Nederlandse Taal en Letteren, 2015). However, there has not been much research on this subject so far.

This study will use the results of the language proficiency screening test developed in the Encouragement Funds project Taalvaardig aan de Start (TaalVaST – 2009-2016), in which a language screening test and follow-up support for first-year students were developed. Earlier research on this screening test has already found a moderate correlation between the results of the language screening and students’ average exam score in January (De Wachter & Cuppens, 2010; De Wachter & Heeren, 2013; De Wachter et al., 2013). This doctoral study comprises two parts; the first investigates the internal test validity:


- Do the test items contribute to one overarching academic language construct?

- Does the use of metacognitive strategies correlate with the time in which a student finishes the screening test, and does that time, in turn, relate to their screening test result?

- How do the screening test results relate to the results on the ITNA computer test?

The second part of the study investigates the effects of background variables (age, gender, SES, home language, school language, pre-university education and the average grade in high school) on the language screening test score. Next, the correlation with academic achievement will be examined, operationalized as, on the one hand, the average exam result in January and June and, on the other, cumulative study efficiency (CSE) in January and June. The effects of other predictive variables, as well as the effects of the different background variables on achievement, will be examined too.

The research questions of the main study are:

- To what extent do (interactions between) background variables determine the screening test result and what is the role of SES?

- Can the correlation between the language screening test and academic achievement found by De Wachter et al. (2013) be confirmed?

- What are the differences in screening test score between different faculties, i.e. to what extent does the screening measure a ‘general’ academic language proficiency?

- What is the predictive value of the screening test, especially in relation to other known predictors?

- Related to the predictive value: can the instrument be considered valid, and would it still be valid if its use were to become more high-stakes?

This study contributes to the literature on post-entry academic language assessment (PELA) (Knoch & Elder, 2013; Read, 2015) and on the relation between language proficiency and academic achievement in general. Firstly, university education outside of Belgium and Flanders often has several pre-entry selection mechanisms, and although the population in Belgian universities does not reflect the composition of society, it is more diverse than in most countries (De Wit, Van Petegem & De Maeyer, 2000; Glorieux, Laurijssen & Sobczyk, 2014a). Secondly, international research often focuses on international students, using admission tests. This can be problematic because researchers then work with a truncated sample: only the students who actually pass the language test can be investigated. In addition, some domestic students also struggle with academic language requirements. Thirdly, the large number of background variables, combined with the size of the dataset (more than 11,000 students), allows for a very detailed statistical analysis. This is useful when validating the use of the test and relating its results to the current admission policy in Belgian universities. Although the test is very low-stakes, it needs to be validated, since thousands of students take it each year at the start of the first year (Winke & Fei, 2008). Moreover, the test has found its way into several pre-university orientation instruments such as Columbus (columbus.onderwijskiezer.be) and the LUCI-platform (www.kuleuven.be/luci). The goal of this study is to construct a validity argument that investigates not only the validity of the instrument itself, but also its claims, uses and consequences (Bachman & Palmer, 2010; Kane, 1992; Kane, 2001; Kane, 2013; Messick, 1989; Knoch & Elder, 2013; Read, 2015).

Date: 13 Oct 2017 → Today
Keywords: Post-entry Language Assessment, University
Disciplines: Education curriculum, Language studies, Literary studies, Linguistics, Theory and methodology of linguistics, Other languages and literary studies
Project type: PhD project