4.5 Testing English

Here is another extract from the chat sessions with Gazan English teachers. In this one, Nida, Rana, and Ayah respond to a question about plurilithic learning and monolithic testing.

Chris: OK, so if learners build their own version of English in their minds, and they get supervision from teachers so that their English actually works… then is it right to test their knowledge of ANOTHER version of English? (I mean the one in the textbooks.) […]
Nida: The version we should test them in is the version we supervised.
Khawla: […] [D]oes the supervision result in the SAME version as in the textbook? […]
Nida: it is supposed to // or at least to be similar […]
Rana: but the reality is different […]
Khawla: So, when it is different in reality, is it fair to test them keeping the textbook’s version as a standard
Rana: w[e]ll it is not fair at all […]
Ayah: no we mus[t]n’t take textbook as a source of our minds […]



Activity

What are the essential points Nida, Rana, and Ayah make here?



Figure 4.5: Plurilithic learning but monolithic testing

[Source: Clipart]



 Discussion point 4.2

The extent to which you agree or disagree with what Nida, Rana, and Ayah say will depend in part on how convincing you’ve found the arguments put forward for your consideration in this course. What’s your opinion? What would you challenge or add? Tell us what you think and respond to others here.

A major challenge to the conclusion reached by Nida, Rana, and Ayah would be that although testing a single variety is unfair, it simply isn’t viable to test the vastly diverse and endlessly shifting plurilithic forms of English that are actually out there or are possible. One might conclude that regional or ELF norms for English should be codified and tested (a position Ayah was sympathetic to, as you’ll see below). Another conclusion would be that ‘Standard English’ is the only viable option (recall the discussion of advantages and disadvantages in the first unit, Thinking about English).
A more radical proposal, more in line with the plurilithic conception of English explored here, is that what ought to be tested is not knowledge of the forms taught to learners, but the communicative effectiveness of learners as users, with forms taken into account only when relevant, and otherwise ignored (Troike, 1983; Jenkins, 2006; Hall, 2014). Some international tests of English already show a degree of recognition of this. For example, work submitted for the IELTS writing assessment is marked according to four criteria, including ‘task response’. On this criterion, a piece of writing achieving the top score of 9:

    • ‘fully satisfies all the requirements of the task’ and
    • ‘clearly presents a fully developed response’


But other criteria stress grammatical accuracy, and the speaking assessment does not include ‘task response’ at all. In line with our more radical proposal, perhaps ‘task response’ should be the only measure for testing ability in other contexts of use, especially oral ones, where language forms are not (as) relevant.

   In Depth

Plainly, current tests of English and the pivotal role they play in shaping global ELT represent a major obstacle to plurilithic approaches to learning and teaching. The prospects for change seem slim. In Jenkins and Leung’s (2017) review of work on ELF assessment, they observe that:

    the major international English language examinations […] continue to assess candidates’ ability with reference to putative native English norms as if they would only be communicating with native English speakers, or nonnative English speakers who only regard standard native varieties as acceptable. The findings of empirical ELF research have thus had no influence to date on the goals of English language assessment and the kinds of English that the boards specify in their descriptors and accept as “correct.” (p. 5)

But there are some encouraging signs. The Common European Framework of Reference (CEFR) is an increasingly well-known and widely used standard for describing language ability, employed in curriculum, materials, and test design around the world. Its application to English has been criticised, however, for failing to recognise that native-speaker norms are no longer always appropriate for test purposes (see references in Jenkins and Leung, 2017). In the new CEFR Companion Volume (CEFR-CV; Council of Europe, 2018), however, the notion of accuracy is (partially) recast in terms of appropriateness, and the revised assessment scales seem more sensitive to lingua franca usage. Harsch (2020, p. 175), for example, states:

    Plurilingualism and interaction between speakers of different languages are at the heart of the European language policy, the original CEFR document, and the CEFR-CV. While mediation, interaction among plurilingual speakers, and translanguaging strategies were not operationalized in a stringent way in the original document, the CEFR-CV has successfully addressed this gap. Here, we find a clear focus on Lingua Franca contexts and a broad concept of mediating across languages […].

Slowly, perhaps, monolithic views of language are changing.