If you were engaged in education in 2002, you will probably be familiar with the 'curriculum 2000 effect'. Students starting new modular AS or A levels in September 2000 generally began four subjects at AS, with the aim of taking three of them on to the full A level. And that’s what most did. But while students said they ‘knew’ at the outset which subject they would drop, they often changed their minds when they got their AS results, dropping instead what appeared to be their weakest subject. The combined effect of thousands of students taking forward their three best subjects was that fewer students failed one of their A levels, and the overall pass rate rose.
Fast forward to summer 2017, and we know some teachers are worried about the reverse happening. Will the lack of formal feedback at the end of year 12 mean overall pass rates fall?
Different approaches by schools
New A levels are structurally different from their predecessors: they are linear, there are generally fewer papers and there is less coursework. All these changes could cause performance to drop in the first year of a new qualification (see our research on the sawtooth effect).
But we know the new A levels have also brought about behavioural changes. We've been talking to schools and colleges about the impact of linear A levels and it’s clear that different approaches are being taken:
- some have moved to three A levels as the default model
- some are using AS level as an ‘exit qualification’ to be taken only in the subject that a student is dropping
- some are entering students for the AS in all A level subjects
- some schools have not changed their approach at all
And we will never know whether there are students who have chosen their subjects this year because they wanted to take only new A levels, or because they wanted to take only A levels that have yet to be reformed.
To be fair...
When qualifications change, we follow the principle of comparable outcomes – this means that if the national cohort for a subject is similar (in terms of past performance) to last year, then results should also be similar at a national level in that subject. So exam boards will control for the impact of the changes such that this year’s cohort is not unfairly disadvantaged. They will be relying heavily on the statistical evidence to do this, but also using senior examiners to check the grade boundaries that the statistics are pointing to.
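The statistical side of this can be sketched with a toy prediction-matrix calculation. Everything below is invented for illustration: the band labels, probabilities and cohort counts are not real figures, and the actual evidence exam boards use is built from national prior-attainment data. The idea is simply that if this year's cohort has a similar prior-attainment profile to last year's, the predicted grade distribution will be similar too.

```python
# Toy sketch of prediction-matrix awarding (all numbers invented).
# Real matrices are estimated from national prior-attainment (e.g. GCSE) data.

# P(grade | prior-attainment band), estimated from the reference year
reference_matrix = {
    "high":   {"A": 0.45, "B": 0.35, "C": 0.15, "U": 0.05},
    "middle": {"A": 0.15, "B": 0.35, "C": 0.35, "U": 0.15},
    "low":    {"A": 0.05, "B": 0.20, "C": 0.45, "U": 0.30},
}

# This year's cohort, counted by the same prior-attainment bands
current_cohort = {"high": 400, "middle": 500, "low": 100}

def predicted_outcomes(matrix, cohort):
    """Expected grade counts if this cohort performs like the reference year."""
    totals = {}
    for band, n in cohort.items():
        for grade, p in matrix[band].items():
            totals[grade] = totals.get(grade, 0.0) + n * p
    return totals

prediction = predicted_outcomes(reference_matrix, current_cohort)
# Grade boundaries consistent with this prediction are then checked by
# senior examiners against the actual scripts.
```

In this hypothetical cohort, the prediction for grade A is 400 × 0.45 + 500 × 0.15 + 100 × 0.05 = 260 students. The statistics suggest boundaries; the examiner judgement described above is the check on them.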
Does that mean that comparable outcomes will compensate for more than just the structural changes this summer?
In considering this with exam boards, we thought about whether we could disentangle the impact of a student taking a linear A level with less coursework from the impact of that student taking only three A levels. And what about the fact that different schools and colleges are taking very different approaches? Or the fact that not all students will get formal feedback from entering the AS, but might get more teaching time for their A level?
In reality, we can't disentangle these behavioural effects from the known structural changes. So, the starting point for teachers thinking about how students will perform this summer in new A levels should be the same as ours, with a focus on the principle of comparable outcomes – if the national cohort for a subject this year is similar to last year, you should expect national results in that subject to be similar too.
If you have a question, do leave a comment or contact us at firstname.lastname@example.org.
Associate Director, Standards and Comparability