Every school and teacher wants to give their pupils the best chance of success when sitting their GCSEs or A levels. One choice they have is which exam board’s specification to adopt in each subject each year. As such, debates frequently develop on social media or at teaching events about current preferences, and in particular which specifications are considered ‘harder’ or ‘easier’ than others.
Naturally, not all exam board specifications are the same. Our regulations allow for differences between specifications as long as the appropriate amount of the stipulated curriculum is covered and the assessment is valid and sufficiently challenging. These variations allow teachers to choose the specification they would most enjoy teaching and which they believe would best suit their pupils’ needs.
Now, I’m not giving anything away by saying it is impossible for examiners to set exam papers at precisely the same level of difficulty each year, or in comparison with other boards in the same year. But these small fluctuations are accommodated by varying grade boundaries. So, in one year one board’s specification might be considered ‘hard’ and another ‘easy’, but flexing the grade boundaries irons out these differences. The reverse might be true the next year, and again the grade boundaries will adjust. These differences, and the subsequent adjustments in grade boundaries, are something we monitor each year in order to maintain standards.
Teachers and pupils alike are increasingly knowledgeable about the process of setting grade boundaries, with interest often triggered by discussion of a particularly challenging question, such as ‘Hannah’s sweets’ in 2015. In summary, we use statistical predictions based on the prior attainment of each board’s entry (KS2 at GCSE and mean GCSE at A level) as a starting point. If the prior attainment of the pupils entering two exam boards’ exams were identical, we would expect the two boards’ results to be identical.
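To make that starting point concrete, here is a minimal sketch of a prediction of this kind. Everything in it is illustrative: the prior-attainment bands, the grades and all the percentages are invented, and the real methodology works with much finer-grained prior-attainment data. The idea is simply that a reference year’s outcomes, broken down by prior attainment, are reweighted by the current entry’s prior-attainment profile.

```python
import numpy as np

# Illustrative only: these bands, grades and figures are invented, not
# Ofqual data, and real predictions use finer prior-attainment categories.

# Reference-year outcomes: for each prior-attainment band (rows), the
# cumulative proportion of pupils achieving each grade or above (columns).
reference_outcomes = np.array([
    #  A*     A      B
    [0.02, 0.15, 0.45],  # lower prior attainment
    [0.10, 0.40, 0.75],  # middle prior attainment
    [0.35, 0.75, 0.95],  # higher prior attainment
])

# One board's entry this year: the share of its pupils in each band.
entry_profile = np.array([0.25, 0.50, 0.25])  # must sum to 1

# Predicted outcomes: weight each band's reference outcomes by the share
# of the board's entry falling in that band.
prediction = entry_profile @ reference_outcomes

for grade, p in zip(["A*", "A", "B"], prediction):
    print(f"Predicted {grade} or above: {100 * p:.1f}%")
```

On these made-up numbers the board would be expected to award roughly 14% at A* or above. A marked deviation from a figure like that is what a board would then need to explain with evidence.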
But that’s not to say the exam boards (and Ofqual) don’t carefully consider other sources of evidence. The senior examiners responsible for setting grade boundaries at each board also scrutinise the quality of pupils’ work, the difficulty of the exam compared to previous years, and any feedback they have received on how the exam has performed.
Most awards are close to what is predicted statistically, but exam boards can submit evidence to us to explain why it is right that their results do not match the prediction. We analyse any evidence submitted very carefully and are often persuaded. When we are not, we challenge the exam board, so that each award has a bank of evidence to support it.
We also monitor the content of exam papers and challenge exam boards if, for example, we detect an unacceptable level of challenge or undue predictability in the questions. We are also piloting new sources of evidence, such as the National Reference Test, to support the maintenance of standards over time. These various processes mean we are confident that any variation in exam boards’ approaches to assessment is taken into account when grade boundaries are set.
For these reasons we believe that the search for the ‘easiest’ exam board is misguided. There may also be a tendency for teachers to think certain styles of question are easier or harder than pupils actually find them when they sit the exam. For example, research shows that teachers tend to underestimate the difficulty of multiple-choice questions and to overestimate the difficulty of more open-ended questions. They also tend to overestimate the difficulty of wordy questions relative to how pupils actually perform on them, and the inclusion of basic calculations in questions can throw their judgements of difficulty. We’ll be publishing work on this shortly.
In addition, we know that changing exam board is no small decision for teachers, and they often change lots of other things at the same time. An improvement in results might therefore be misattributed to the adoption of an ‘easier’ specification rather than to these other changes. And how often is a consequent drop in results simply not talked about?
So the next time you see or hear a debate about which board’s specification is ‘harder’ or ‘easier’ than the next, you can be sure that we’re thinking about it too. That leaves schools free to choose a specification based on an individual teacher’s interests and their pupils’ needs, which must be a better basis for the decision.
Michelle Meadows
Executive Director – Strategy, Risk and Research