The importance of assessing what students can really do and why good assessment means accessible assessment.
England’s exams regulator, Ofqual, has published draft guidance for GCSE, AS and A-level exam boards in developing accessible assessments. The consultation puts a spotlight on one of the most important challenges for exam boards. But what is ‘accessible assessment’ and why is it important?
How do we make assessments fair?
An assessment is only useful if the story it tells us about a student is the right one.
A maths exam, for example, should reveal only how good a student is at maths. If the questions asked mean that other factors, such as cultural experience, inappropriately affect the student's performance, then construct-irrelevant variance occurs.
As a result, the story the assessment tells about the student can be the wrong one.
One aspect of question design that can distort the validity of assessment is the use of context, which can (dis)advantage groups of students because of their different experiences of the world.
In some subjects, such as English Language or modern foreign languages, context is unavoidable because students have to read and respond to unseen texts and write texts of their own.
The challenge for the examiner is selecting texts that don’t assume inappropriate prior knowledge on the part of the student.
However, this is easier said than done.
Examiners are degree-educated adults who view the world through a particular lens. Like anyone, they can find it difficult to imagine all the life experiences of the thousands of students taking the exams.
How do we measure what students can really do?
One area that the Ofqual consultation is focusing on is accessibility and removing unnecessary burdens on students sitting assessments. For example, ensuring question instructions are clear.
Whilst exam boards work hard to ensure questions are accessible to all students, there are a few examples where assessments have not been.
A question appeared on a GCSE English exam some years ago that asked students to write an article about whether participating in dangerous sports was irresponsible.
One teacher said that many of her pupils were stumped by what was meant by dangerous sports – even if they’d seen skiing on TV or knew that people climbed mountains, those activities just didn’t come to mind.
When the teacher asked the students what they thought a dangerous sport might be, they suggested it could be something involving guns. In a classroom, the teacher can explain the examples given. In a pressured exam, however, the question privileges students who happen to share the examiner's understanding of the term, an understanding not necessarily informed by the specification.
Maths exam questions, too, use real-world contexts to help the student ground a mathematical concept in a less abstract setting, providing some 'mental scaffolding' for the student to show what they can do.
But context can also hamper students' attainment. An examiner might, for example, write a question based around a game of skittles because, within their own experience, everyone knows what that is. To many teenage students, however, Skittles are a colourful fruit sweet, not a game, so the unfamiliar context creates an unnecessary barrier.
Even familiar contexts can cause problems as some students, when faced with a real-world problem, may try to solve it with a real-world solution.
For example, an exam question for primary-age children that tested division skills by asking them to share out bottles of Coke fairly at a party left many students struggling to answer. In the real world, after all, some students don't like Coke, and partygoers would be unlikely to drink an equal amount each.
The ability to engage with a context depends on many factors, including cultural familiarity and language fluency. If any of those factors prevent a student from answering a maths question then that question is not a valid test of maths skills.
Why don’t exam boards pre-test questions?
When assessment is used in other sectors, questions that unintentionally disadvantage particular groups of test-takers can be removed before the test is taken.
This is because some jurisdictions' assessments are subject to pre-testing, where questions are answered by groups of students in advance. The results of the pre-testing can then be subject to 'differential item functioning' (DIF) analysis, which shows whether students with a particular demographic characteristic are systematically under-performing on a question.
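To make the idea concrete, here is a minimal sketch of one common DIF method, the Mantel-Haenszel common odds ratio: students from two groups are matched on their total score on the other items, and the odds of answering the studied item correctly are compared within each score stratum. All the data, group labels, and function names below are illustrative assumptions, not drawn from any real exam or from Ofqual's guidance.

```python
# Hedged sketch of Mantel-Haenszel DIF analysis for a dichotomous item.
# Records are (group, total score on other items, 1 if item answered correctly).
# The data are invented purely for illustration.

from collections import defaultdict

responses = [
    ("reference", 3, 1), ("reference", 3, 1), ("reference", 3, 0),
    ("focal",     3, 1), ("focal",     3, 0), ("focal",     3, 0),
    ("reference", 5, 1), ("reference", 5, 1),
    ("focal",     5, 1), ("focal",     5, 0),
]

def mantel_haenszel_odds_ratio(records):
    """Common odds ratio across score strata.

    An odds ratio far from 1 suggests the item functions differently
    for the two groups even after matching on overall ability,
    i.e. possible DIF worth investigating.
    """
    # For each score stratum, count [correct, incorrect] per group.
    strata = defaultdict(lambda: {"ref": [0, 0], "foc": [0, 0]})
    for group, score, correct in records:
        key = "ref" if group == "reference" else "foc"
        strata[score][key][0 if correct else 1] += 1

    numerator = denominator = 0.0
    for cells in strata.values():
        a, b = cells["ref"]  # reference group: correct, incorrect
        c, d = cells["foc"]  # focal group: correct, incorrect
        n = a + b + c + d
        if n == 0:
            continue
        numerator += a * d / n
        denominator += b * c / n
    return numerator / denominator if denominator else float("inf")

print(round(mantel_haenszel_odds_ratio(responses), 2))  # prints 7.0
```

With this toy data the odds ratio is well above 1, flagging the item for review; in practice, established tooling (for example, the stratified-table routines in statistical packages) would be used rather than a hand-rolled function, and significance tests would accompany the point estimate.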
However, a pre-testing system wouldn't be practical in our current system, given the cost and complexity involved in pre-testing every item.
Instead, exam boards rely on analysing how different groups of students perform in assessments after they have been sat, and improve questions and assessment design year on year.
Those who design GCSE, AS and A-level assessments also need to focus on designing in fairness upfront, continuing to make sure that examiners and exam board staff understand the importance of writing questions that don't assume knowledge shared only by some.
Ofqual’s consultation provides an opportunity to reflect on how we in the assessment world can make sure that our assessments tell the true story about a student’s capabilities.
If assessments are not easily understood by all learners, we run the risk of assessing the wrong skills and unfairly disadvantaging some students.
Assessments need to be as accessible as possible, to give all students a fair chance to show what they know and can do.