Since the start of the autumn term, we’ve been asked an increasing number of questions about next summer’s awarding process. Many of these questions are being addressed through our 9 to 1 campaign. We’re also aware that teachers, students and parents are discussing these issues online, through social media, blogs and various forums.
One of the more detailed questions, raised with us both directly and in open discussion, has been about how we are going to use statistical predictions for 2017. For example, we say that the proportion of students in 2017 achieving grade 4 and above will be similar to the proportion of students in 2016 achieving grade C or above. What many people want to know is, which students are we comparing: all students or 16 year-olds only?
So, to clarify, the predictions that exam boards will use will be based on 16 year-old students who can be matched to their prior attainment at key stage 2. This is an established methodology. There is more information about how those predictions work in the inter-board comparability report from December 2015, and in April 2016 we published modelling work, based on 16 year-olds, to show possible grade distributions in summer 2017.
Whenever exam boards use these predictions, the principle is always to compare like with like. If we want to predict the performance of 16 year-olds, then exam boards will look back at the performance of 16 year-olds in previous years.
One of the reasons for using statistical predictions is to avoid unfairly disadvantaging this year’s year 11 cohort. It’s likely that most students beyond key stage 4 will be entering for the legacy GCSEs in summer 2017, and so the cohort for the new 9 to 1 GCSE will be unusual in that it will be largely 16 year-olds. Following the like with like principle, therefore, we’ll be making comparisons with 16 year-olds in 2016, not with all students of all ages.
Anyone who has seen some of the analysis we have published on results days in previous years will be aware of this like with like principle: we regularly make the point that it is more meaningful to compare year-on-year results for 16 year-olds, particularly when entry patterns continue to fluctuate.
One of the benefits of using statistical predictions is to achieve comparability of standards between exam boards. Exam boards will use predictions based on national outcomes, and that will allow us to judge the comparability of grade standards between the different boards. Because the predictions are based on national outcomes and then tailored to each exam board’s cohort, they will also take account of any significant shifts in entry between exam boards.
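To illustrate the tailoring idea in the paragraph above, here is a deliberately simplified sketch. It is not Ofqual’s actual model: the prior-attainment bands, the rates and the cohort numbers are all invented for illustration. The point it shows is that a single set of national outcome rates, broken down by prior attainment, yields different predicted proportions for different boards once weighted by each board’s own cohort mix.

```python
# Toy illustration only: NOT the real awarding model. The bands, rates and
# cohorts below are invented. It shows how a national prediction can be
# tailored to an individual exam board's cohort.

# Hypothetical national outcomes: for each prior-attainment band (based on
# key stage 2), the proportion of 16 year-olds achieving grade C/4 or above.
national_rates = {"low": 0.25, "middle": 0.55, "high": 0.90}

def predicted_proportion(cohort_counts):
    """Weight the national rates by this board's mix of prior attainment."""
    total = sum(cohort_counts.values())
    weighted = sum(national_rates[band] * n for band, n in cohort_counts.items())
    return weighted / total

# A board whose entry skews towards higher prior attainment gets a
# correspondingly higher predicted proportion at grade 4 and above.
board_a = {"low": 100, "middle": 300, "high": 600}
board_b = {"low": 400, "middle": 400, "high": 200}
print(round(predicted_proportion(board_a), 3))  # 0.73
print(round(predicted_proportion(board_b), 3))  # 0.5
```

Because the same national rates feed every board’s prediction, a shift in entries between boards changes each board’s cohort mix, and therefore its prediction, without changing the standard itself.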
In mathematics we have also been working to achieve comparability of grade standards where the same grades are available on both the foundation and higher tiers. Before these qualifications were designed, we put in place rules to secure comparability between tiers: exam boards must design their papers so that at least 20% of the marks come from tasks common to the foundation and higher tiers, and they must put arrangements in place to secure comparability between tiers. We are currently discussing with exam boards the precise detail of how this will work, with the intention of agreeing a common approach that combines examiner judgement with statistical information about how well students performed on the common questions, to align standards across the tiers.
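As a rough sketch of how statistics on those common questions might feed into that judgement, consider the toy example below. It is an assumption on our part, not the agreed procedure: the mark data, the boundary values and the within-2-marks window are all invented. The underlying idea is that if candidates at the proposed grade 4 boundary on each tier perform similarly on the questions both tiers share, that supports the two boundaries representing the same standard.

```python
# Toy illustration only: not the agreed tier-alignment procedure. It shows
# one way performance on common questions could inform examiner judgement.

def mean_common_mark(candidates, boundary, window=2):
    """Mean mark on the common questions for candidates whose total paper
    mark falls within `window` marks of the proposed grade boundary."""
    near = [c["common"] for c in candidates if abs(c["total"] - boundary) <= window]
    return sum(near) / len(near)

# Hypothetical candidates: total paper mark and mark on the common questions.
foundation = [{"total": t, "common": c} for t, c in [(58, 14), (60, 15), (61, 16)]]
higher = [{"total": t, "common": c} for t, c in [(29, 15), (30, 14), (31, 16)]]

# Compare boundary groups on the shared questions: a small gap is evidence
# that the proposed grade 4 boundaries on the two tiers are aligned.
gap = mean_common_mark(foundation, boundary=60) - mean_common_mark(higher, boundary=30)
print(abs(gap) < 1.0)  # True
```

In practice any such statistic would only be one input alongside examiner judgement, as the paragraph above describes.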
We’re working closely with the exam boards to make sure that when they’re setting standards in summer 2017 they continue to follow the like with like principle, so that this year’s year 11 cohort is not disadvantaged by being the first to sit these new qualifications.
Associate Director, Standards and Comparability