The Ofqual blog

https://ofqual.blog.gov.uk/2019/10/01/how-not-to-compare-standards-between-exam-boards/

How (not) to compare standards between exam boards

Categories: A levels and GCSEs, Exams

From time to time we see suggestions that one exam board’s GCSE is easier than another, most often because one board has a higher percentage of students achieving high grades. I’d like to explain why simply comparing the grade profiles of exam boards is a really poor way to measure comparability in a subject. And then I’d like to talk about a much better measure, and one that we use and report on each year.

[Image: four teenage girls collecting exam results]

Exam board specifications in a subject are not all the same. The subject content is set by the Department for Education; we set rules about how that content should be assessed and the relative weighting of exam and non-exam assessment in some subjects, but that still leaves some room for variation in approach. In accrediting the reformed specifications, we aimed to make sure that the small differences in approach did not undermine our ability to align exam board standards in a subject. That gives teachers some flexibility to choose the specification they would most enjoy teaching and believe would best suit their students.

Depending on the choices schools and colleges make about which specification to use, exam boards can have very different entries, in terms of the size and the ability of their cohorts. If each board has a similar ability profile of students (as indicated by, for example, Key Stage 2 or GCSE prior attainment), then we would expect to see similar grade profiles between the boards. But if one board’s entry has a greater proportion of higher-ability students, then it makes sense that that board will have a greater proportion of high grades. That’s what we see in practice, as exam boards’ entries in a subject are not equal in size or ability profile. In those circumstances, imposing the same grade profile on each board would be unfair, as it would disadvantage schools that happen to choose the board with a greater proportion of higher-ability students.

We believe that the search for the ‘easiest’ exam board is misguided. The statistical predictions that the boards use to guide their awards are a tool we use to make sure that exam boards’ standards in a subject are aligned. These predictions use students’ prior attainment (Key Stage 2 attainment to predict GCSE, and GCSE attainment to predict AS and A level), aggregated to produce a national prediction matrix for each subject. When each board uses that to generate its own prediction for GCSE English language or A level history, for example, that prediction will reflect the prior attainment of that board’s entry. If one board has an entry where students tend to have higher than average prior attainment, its prediction for the top grades will tend to be higher than other boards’. That doesn’t mean one board’s specification is easier than another: it’s simply a reflection of the students entering with that board.
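To make the arithmetic concrete, here is a minimal sketch of how a single national prediction matrix, applied to two boards with different entries, produces different predictions under the same standard. Everything in it (the prior-attainment groups, the grades shown and the percentages) is invented for illustration and is not real Ofqual data.

    # Illustrative sketch only (Python): groups, grades and numbers are invented.
    #
    # A national prediction matrix gives, for each prior-attainment group, the
    # cumulative percentage of students expected to achieve each grade or above
    # in the subject, based on a reference cohort.
    prediction_matrix = {
        "high":   {"7+": 55.0, "4+": 98.0},
        "middle": {"7+": 15.0, "4+": 85.0},
        "low":    {"7+": 2.0,  "4+": 45.0},
    }

    def board_prediction(entry_profile, grade):
        """Weight the national matrix by a board's own prior-attainment
        profile (proportions of its entry in each group, summing to 1)."""
        return sum(share * prediction_matrix[group][grade]
                   for group, share in entry_profile.items())

    # Two hypothetical boards offering the same subject.
    board_a = {"high": 0.40, "middle": 0.45, "low": 0.15}  # stronger intake
    board_b = {"high": 0.20, "middle": 0.50, "low": 0.30}  # weaker intake

    for name, profile in [("Board A", board_a), ("Board B", board_b)]:
        print(name,
              f"predicted 7 or above: {board_prediction(profile, '7+'):.1f}%,",
              f"predicted 4 or above: {board_prediction(profile, '4+'):.1f}%")

Run as written, Board A’s predictions come out higher than Board B’s (roughly 29% against 19% at grade 7 and above) even though both use exactly the same matrix: the difference comes entirely from the make-up of each board’s entry, not from any difference in standard.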

Of course, exam boards can present evidence to us to support awards that produce outcomes that differ from those predicted by the cohort’s prior attainment. But in considering these awards we always take a cross-board perspective to ensure comparability between boards within a year.

So when you see or hear a debate about which board’s specification is ‘harder’ or ‘easier’ than the next, please bear this in mind. Exam boards use predictions to ensure grade standards are comparable, so that it is no easier or harder to achieve a particular grade with one exam board than with another. Each exam board’s prediction will reflect the prior attainment profile of that board’s students, meaning outcomes by boards may vary, but standards do not. Teachers can, therefore, feel free to choose the specification that best suits their students without worrying that their choice could mean losing out on top grades.

2 comments

  1. Comment by venice posted on

    Comparing Eduqas, AQA and Pearson for A level art grade boundaries, it appears that Pearson's requirements for each grade are significantly lower in percentage terms than the other two. Is this correct and fair? How can that be?

    • Replies to venice

      Comment by Ian Dexter posted on

      Many thanks for your comment.

      To ensure results are fair, grade boundaries are set to reflect the demand of each assessment, which may vary from one year to another and between exam boards. Grade boundaries are therefore set after assessments have been taken, when the exam boards have evidence about how easy or difficult they were.

      Exam boards use data about the prior attainment of the students taking their assessments (their GCSE results or Key Stage 2 test results) to predict how they are likely to perform, alongside the judgements of senior examiners about the quality of student work compared with previous years.

      The percentage of marks needed to get a particular grade in a subject might vary between exam boards. This reflects differences in the relative demand of the assessments for individual exam boards as well as differences in the marking criteria.
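The reply above describes the key mechanism: boundaries are set after the assessment, so that outcomes line up with the statistical predictions and with senior examiners’ judgement. As a rough illustration of why the boundary for the same grade can sit at a different percentage of the marks on different boards’ papers, here is a minimal sketch; the mark distributions, cohort sizes and the predicted percentage are all invented, and real awarding also weighs examiner judgement rather than following the statistics mechanically.

    # Illustrative sketch only (Python): invented marks and predictions.
    import random

    def boundary_for_prediction(marks, predicted_pct):
        """Lowest mark at which the percentage of students on or above it
        no longer exceeds the predicted percentage for the grade."""
        n = len(marks)
        for mark in range(max(marks) + 1):
            if 100 * sum(m >= mark for m in marks) / n <= predicted_pct:
                return mark
        return max(marks)

    random.seed(0)
    max_mark = 80
    predicted_pct = 20.0   # invented prediction for the grade or above

    # Board X's paper proved more demanding than Board Y's, so students of
    # similar ability scored lower marks on it.
    board_x = [min(max_mark, max(0, round(random.gauss(40, 12)))) for _ in range(1000)]
    board_y = [min(max_mark, max(0, round(random.gauss(50, 12)))) for _ in range(1000)]

    for name, marks in [("Board X", board_x), ("Board Y", board_y)]:
        b = boundary_for_prediction(marks, predicted_pct)
        print(f"{name}: boundary at {b}/{max_mark} marks "
              f"({100 * b / max_mark:.0f}% of the paper)")

In this toy example both boards end up with about 20% of their students at or above the grade, but the harder paper’s boundary falls at a lower percentage of the total marks. A lower boundary therefore signals a more demanding assessment, not an easier route to the grade.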