Mythbusting: 3 Common Misconceptions

Here are three things we often hear:

  1. GCSEs used to be criterion referenced.
  2. My school can’t improve because of Ofqual’s approach.
  3. There is a cap on the number of each grade awarded.

None of these is true.

“GCSEs used to be criterion referenced.”

Criterion referencing involves measuring a student’s performance against pre-determined criteria, or learning outcomes - written descriptions of what students should know and be able to do. For a qualification to be criterion referenced, the criteria would have to be very clear so that all those involved in assessing students had a common understanding of what was required.

In GCSE the closest thing we have to such criteria are grade descriptors. Here’s an example from the legacy GCSE maths grade descriptor for grade C:

“Learners use a range of mathematical techniques, terminology, diagrams and symbols consistently, appropriately and accurately. Learners are able to use different representations effectively and they recognise some equivalent representations, for example numerical, graphical and algebraic representations of linear functions; percentages, fractions and decimals. Their numerical skills are sound and they use a calculator accurately. They apply ideas of proportionality to numerical problems and use geometric properties of angles, lines and shapes.”

In a criterion-referenced qualification, each of these statements would need to be met. So a student who could not correctly answer the questions that required them to “use geometric properties of angles, lines and shapes” would not achieve a grade C, no matter how many marks they scored on other questions.

Criterion referencing is in effect a series of mini hurdles that students have to get over. If they fall at one of them, they miss out on that qualification, or that grade.

That’s why GCSEs have never been criterion referenced. GCSEs are ‘compensatory’ qualifications so better performance in one area can compensate for poorer performance in another. Grade descriptors have always been used in a ‘best fit’ sense (what some have called ‘weak criterion referencing’) rather than as absolute criteria to judge performance, and statistics have always helped to guide senior examiner judgments. If we were to use grade descriptors as absolute criteria, the percentage of students who had met all the criteria to achieve grade C and above would have been very low.
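The difference between the two models can be sketched in a few lines of code. This is purely illustrative: the subject areas, marks and thresholds below are invented, not taken from any real mark scheme.

```python
# Illustrative sketch: compensatory grading (as in GCSEs) vs strict
# criterion referencing. All names and numbers here are hypothetical.

def compensatory_grade(marks, boundary):
    """Sum marks across all areas: strength in one area can offset
    weakness in another."""
    return sum(marks.values()) >= boundary

def strict_criterion_grade(marks, thresholds):
    """Every criterion is a separate hurdle: failing any one of them
    means missing the grade, regardless of total marks."""
    return all(marks[area] >= t for area, t in thresholds.items())

# A student who is strong on number and algebra but weak on geometry.
student = {"number": 28, "algebra": 25, "geometry": 9}

# Compensatory: total of 62 clears a (hypothetical) boundary of 55.
print(compensatory_grade(student, boundary=55))  # True

# Strict criterion referencing: the geometry hurdle (15) is missed,
# so the grade is not awarded, however high the total.
print(strict_criterion_grade(
    student, {"number": 15, "algebra": 15, "geometry": 15}))  # False
```

The same student passes under the compensatory model but fails under the strict one, which is the “mini hurdles” point made above.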

It’s also worth saying that GCSEs and A levels are not norm-referenced either. If they were, we’d see fixed proportions of each grade in each specification, regardless of the ability of the cohort or the ability profile of any one exam board.

“My school can’t improve because of Ofqual’s approach.”

Some schools and colleges have very stable year-on-year results. But we also know from the analyses we’ve published in recent years that some variation in year-on-year results is normal. This can be due to many different factors, including differences in the ability mix of the students, different teaching approaches, changes in teaching staff or teaching time, and changes to qualifications.

Our approach to maintaining standards does not prevent schools from improving their own results even when national results remain steady. If exam boards have evidence to support higher (or lower) national results than those predicted, we will consider that evidence. We use reporting tolerances that vary with the size of the entry: depending on entry size, exam boards can vary their results (relative to predictions) by up to 1, 2 or 3 percentage points without needing to provide us with additional evidence. In general, we wouldn’t expect to see large changes in the proportion of each grade awarded year on year in high-volume subjects, and nor do we see large changes in small-entry subjects, where exam boards rely much more on examiner judgement to set grade boundaries. In some specifications the entry numbers are so small that no reporting tolerances apply. Each year, we accept around 30 awards that sit outside tolerance, where the evidence supports them.
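A tolerance check of this kind can be sketched as follows. Note that the entry-size bands used here are invented for illustration: the text above only says that tolerances of 1, 2 or 3 percentage points apply, with wider tolerances for smaller entries.

```python
# Hypothetical sketch of entry-size-dependent reporting tolerances.
# The entry-size cut-offs (3000, 500) are assumptions, not Ofqual's
# actual bands; only the 1/2/3-point tolerances come from the text.

def tolerance(entry_size):
    """Wider tolerance for smaller entries (assumed bands)."""
    if entry_size >= 3000:
        return 1.0
    if entry_size >= 500:
        return 2.0
    return 3.0

def needs_evidence(predicted_pct, awarded_pct, entry_size):
    """An award outside tolerance needs supporting evidence from
    the exam board before it is accepted."""
    return abs(awarded_pct - predicted_pct) > tolerance(entry_size)

# 1.5 points above prediction: outside the 1-point tolerance for a
# large entry, but inside the 2-point tolerance for a mid-sized one.
print(needs_evidence(68.0, 69.5, entry_size=5000))  # True
print(needs_evidence(68.0, 69.5, entry_size=800))   # False
```

The point of the sketch is simply that the same deviation from prediction may or may not require evidence, depending on the size of the entry.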

Recent research from Cambridge Assessment refutes the idea that our approach prevents individual schools from improving. It concludes that individual schools and colleges can still demonstrate improvement because in most cases the impact on individual schools of lowering a grade boundary is relatively small.

“There is a cap on the number of students that can be awarded each grade.”

Exam boards use prior attainment data to predict the likely percentages of each grade to be awarded in a specification. In most cases, awards are reasonably close to prediction. But, as explained above, exam boards can come to us with evidence to support an award that is out of line with predictions in a particular subject, and this does happen in around 30 awards each year. So, while we would not normally expect results to be out of line with predictions, there is no fixed quota of each grade.

If you're not sure, ask

These are just some of the things we hear, but they’re not the only myths out there. Earlier this week, someone asked me whether the unreformed GCSEs (those still graded A* to G) were going to be more demanding this summer. They’re not. If you hear something you’re not sure about, then please feel free to ask us.

Phone us on 0300 303 3344.



Cath Jadhav
Associate Director, Standards and Comparability
