Using Comparative Judgement to assess oracy: something to shout about?
Photo: Patrick Fore / Unsplash

Oracy is the ability to articulate ideas, develop understanding and engage with others through spoken language.

In school, it is a powerful learning tool.

Teaching students to become more effective speakers and listeners empowers them to better understand themselves, each other and the world around them.

Oracy is also a route to social mobility, empowering all, not just some, to find their voice for success in school and life.

Around the UK, hundreds of schools see it as a priority, so why is it not more valued at the heart of the education system?

It could be that there are other issues taking precedence such as staffing levels, budgetary demands or timetabling pressures.

Another reason could be that there is no national, standardised test for oracy.

Beyond the Early Years, we have no national statistics for oracy, unlike literacy and numeracy.

As a result, teachers cannot reliably understand student attainment and progress in comparison to the national picture.

Those monitoring schools, whether in government, local authorities or MATs, lack the evidence needed to identify areas where support is needed or measure the effect of that support.

A key reason for lack of standardised testing is that historically, oracy has been considered “too subjective” to assess, particularly in high-stakes exams.

We have been looking at a solution for these issues with a pilot project using comparative judgement to assess and reliably score students.

Students are filmed completing a spoken assessment task, then those videos are shared with a nationwide network of assessors – familiar with the Oracy Framework – who compare multiple videos, multiple times.

Videos are judged in pairs against a simple statement such as “choose the best explanation.”

These human judgements are fed into specialist software, which combines them to create a rank order of all the performances.
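The idea of combining many pairwise judgements into a single rank order can be sketched with the Bradley–Terry model, a standard statistical approach to pairwise comparisons. The article does not say which method the specialist software actually uses, so this is an illustrative sketch only; the performance IDs and judgement data are invented.

```python
# Illustrative sketch: combine pairwise "which is better?" judgements into a
# rank order using the Bradley-Terry model (standard for pairwise data; not
# necessarily the method used by the software described in the article).
from collections import defaultdict

def bradley_terry(judgements, n_iters=100):
    """Estimate a strength score per performance from pairwise wins.

    judgements: list of (winner, loser) pairs, one per judge decision.
    Returns a dict mapping performance id -> strength (higher = better).
    """
    wins = defaultdict(int)          # total wins per performance
    pair_counts = defaultdict(int)   # comparisons per unordered pair
    items = set()
    for winner, loser in judgements:
        wins[winner] += 1
        pair_counts[frozenset((winner, loser))] += 1
        items.update((winner, loser))

    strength = {i: 1.0 for i in items}
    for _ in range(n_iters):
        new = {}
        for i in items:
            denom = 0.0
            for pair, n in pair_counts.items():
                if i in pair:
                    (j,) = pair - {i}
                    denom += n / (strength[i] + strength[j])
            new[i] = wins[i] / denom if denom else strength[i]
        # Normalise so the mean strength stays at 1.0
        total = sum(new.values())
        strength = {i: s * len(items) / total for i, s in new.items()}
    return strength

# Hypothetical data: three videos, each pair judged twice.
judgements = [("A", "B"), ("A", "B"), ("A", "C"), ("C", "A"),
              ("B", "C"), ("C", "B")]
scores = bradley_terry(judgements)
ranking = sorted(scores, key=scores.get, reverse=True)  # best first
```

Because every performance is compared against several others by several judges, the fitted strengths reflect the pool of judgements as a whole rather than any one assessor's opinion.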

When assessing oracy, comparative judgement offers two significant benefits that help us score student performances reliably.

First, it avoids the challenging task of describing which levels of performance should receive which marks. Instead of being compared to mark schemes, performances are ranked against each other.

Second, it tackles “subjectivity.”

Because each performance is assessed by multiple judges before the final rank is generated, moderation is in effect built in.

Our initial trials in around thirty schools used assessment tasks developed specifically for use with comparative judgement.

This approach yielded more reliable rankings of student work than we could have gained through existing methods.

The trials’ success led to a larger-scale pilot with Year 5 students which generated a national sample of oracy performances and informed our latest Impact Report.

It showed that students in our longest-standing oracy schools returned the strongest performances.

Scores were 50% higher for students in fourth-year oracy schools than for those in first-year oracy schools.

These findings will not only enable us to provide schools with much clearer guidance on what ‘good’ looks like, but also develop an assessment tool for schools to measure oracy attainment and progress in a national context.

This will raise the bar for oracy education by letting teachers and students see what “great” looks like and work towards that.

We believe this approach could be a game changer.

If comparative judgement is proven reliable and fair for assessing oracy, it enables an evidence-informed approach at all levels.

Teachers will be able to target support to those who need it most and ensure students progress as they move up the school.

Schools, trusts, local authorities and central government can identify and nurture best practice.

And students will be able to understand clearly how their oracy abilities are developing and value them as skills to be honed rather than fixed attributes.

Major challenges to the widespread use of comparative judgement remain. When done on a large scale, it can be resource intensive, taking up lots of teachers' valuable time.

However, Voice 21 is committed to developing methodology and technology that are both robust and time efficient.

We think a new oracy assessment, powered by comparative judgement, is worth exploring as a way of ensuring every child receives the high-quality oracy education to which they are entitled.

Amanda Moorghen is Head of Learning, Impact and Policy at Voice 21 UK.
She has an MSc in Social Policy and Social Research from UCL’s Institute of Education.

As part of AQi’s work, we are inviting people from a wide range of viewpoints to engage with us on a wide range of topics. We welcome alternative views to help stimulate discussion and ideas. The views of external contributors do not necessarily represent the views of AQi or AQA.
