The Times Educational Supplement recently included an in-depth look at GCSE results and at how we use what has become known as comparable outcomes, asking whether rising results are now a thing of the past.
I am pleased to see the debate. The issues are so important - and contentious. We all have a stake in them and the more we can talk about them, the better, especially as we prepare for major reform of GCSEs and A levels.
We are talking about the performance standard in a qualification, or where the bar is set. There is no single, universal way of doing it, so we cannot simply rely on how things are done the world over. On the contrary, most approaches used elsewhere would not be deemed appropriate here.
So how do we make sure the bar is set at the right height, consistently? And how should we do it? Let's get a few myths out of the way, first of all.
There is no political interference. I don’t know how things were in the past, before Ofqual existed, though I have seen no evidence that previous Governments interfered, and I don’t believe they did. It is now our job to ensure standards and we do that without any instruction or interference from Government. I expect that’s hard for many to believe, and I can see why, but that is what Parliament intended when it established Ofqual as an independent regulator. We haven’t been leant on by Government, or anyone else, to take standards this way or that.
We do not use ‘norm-referencing’. Under that approach, the same proportion of students would achieve each grade, year on year. It has some attractions – it can be relatively simple to operate, and some regard it as the clearest and most defensible approach to setting standards. But it doesn't allow for changes in the student mix, or for improvements in true attainment. We don’t discount norm-referencing as an approach, but we don’t use it.
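For readers who want to see what pure norm-referencing would mean in practice, here is a minimal sketch in Python. The grade proportions and marks are invented for illustration and are not Ofqual figures: grade boundaries are placed so that a fixed share of the cohort gets each grade, whatever the cohort's actual marks.

```python
# A hypothetical sketch of norm-referencing. The proportions and marks
# below are invented for illustration, not Ofqual's actual figures.

def norm_reference(marks, grade_shares):
    """Assign grades so a fixed share of candidates gets each grade,
    regardless of how well the cohort actually performed."""
    ranked = sorted(marks, reverse=True)      # best candidates first
    grades, cut = {}, 0
    n = len(ranked)
    for grade, share in grade_shares:
        take = round(share * n)               # fixed quota for this grade
        for mark in ranked[cut:cut + take]:
            grades.setdefault(grade, []).append(mark)
        cut += take
    return grades

# Example: top 20% get A, next 30% get B, the remaining 50% get C.
cohort = [72, 65, 88, 54, 91, 47, 60, 78, 83, 69]
print(norm_reference(cohort, [("A", 0.2), ("B", 0.3), ("C", 0.5)]))
```

The sketch makes the weakness plain: if the whole cohort genuinely improves, the quotas are unchanged, so the improvement never shows up in the results.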
We do not cap the numbers of students achieving particular grades. There is no quota, and we do not issue blanket bans to prevent improvement. But where proposed results change significantly from one year to the next, we need to be satisfied that there is a good reason for that, otherwise confidence is undermined.
We do use an approach which prioritises comparable outcomes to manage changes in qualifications. When new specifications come in for established qualifications, results can fall just because teachers are not familiar with the new specification. To be fair to students from one year to the next, we expect exam boards to use statistical modelling to ensure that results are comparable, year on year, despite the change in the specification. We set out our expectations to exam boards, ahead of awarding.
We and exam boards use statistical modelling. Once a new qualification has settled down, we want to be as confident as possible that any changes in results reflect true changes in attainment and not just unexplained grade creep, which can happen. That is easier said than done. Exam boards use predictions to guide awarding. We set tolerances around these predictions, so results can go up or down (and they do), but we expect exam boards to provide credible evidence for being outside of tolerance, either way.
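As a rough illustration of the kind of check described above, here is a hypothetical sketch in Python. The prediction, tolerance and figures are all invented for the example; the real models are more sophisticated and draw on matched prior-attainment data.

```python
# Hypothetical sketch of a tolerance check around a statistical prediction.
# All figures are invented; real awarding models are far more sophisticated.

def within_tolerance(predicted_pct, proposed_pct, tolerance_pts):
    """Flag whether a proposed outcome needs further evidence.

    predicted_pct  -- modelled % of candidates expected at the grade,
                      e.g. based on the cohort's prior attainment
    proposed_pct   -- % the exam board's proposed boundaries would produce
    tolerance_pts  -- allowed deviation, in percentage points
    """
    deviation = proposed_pct - predicted_pct
    return abs(deviation) <= tolerance_pts, deviation

# Example: the model predicts 68.5% at grade C or above; the board's
# proposed boundaries would give 71.2%, with a 2-point tolerance.
ok, dev = within_tolerance(predicted_pct=68.5, proposed_pct=71.2,
                           tolerance_pts=2.0)
if ok:
    print(f"Within tolerance ({dev:+.1f} points): award can proceed.")
else:
    print(f"Outside tolerance ({dev:+.1f} points): credible evidence needed.")
```

Note that results can move in either direction within the tolerance band; it is only when a proposed outcome falls outside it, up or down, that the board must justify the difference.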
Statistics are used in the awarding process to set out expectations, alongside examiner judgement.
All the available evidence is used by exam boards to set the bar for each subject and each qualification.
Nevertheless, people are suspicious about the use of statistics, and those in schools will be particularly concerned after GCSE English last year.
So, what can we do then to ease those concerns? One thing we can do is to be as open as possible about what we and exam boards do, how we do it and why we do it. You can find out more about marking and grading and our approach to setting standards here.
I recognise that the system isn’t perfect. There is no perfect system. We need to work at making sure ours is the best it can be. Exam boards may not always be able to provide credible evidence of improvement when improvement has happened, and here we rely, as ever, on examiner judgement, which is so important and central to awarding.
We need to make sure that the system enables examiner judgement and credible evidence to feed through as well as they can. And we want to take every opportunity to improve and expand the data sets that we and exam boards rely on, so that our predictions and our expectations are as good as they can be. The more data and information we have to support standard setting, the better.
We will discuss and debate how standards should be set and maintained in forthcoming qualifications: the new GCSEs and A levels. We will start that debate in the autumn. For now, we are intent on making sure that those in schools and colleges (leaders, teachers and students) understand how it is done now, and can see it working openly, and as it should.
Glenys Stacey
Chief Regulator