Earlier this year we announced our plans to develop and pilot a national reference test. Some teachers, parents and students are understandably concerned about “yet another test”, but others are positive, appreciating why we think it right to try to develop such a test.
Of course, schools selected to participate want to know that the test is manageable. We have worked closely with the Association of School and College Leaders and others to design a test that minimises the load on schools and students, and we have been testing those arrangements. We know schools will want the tests to run smoothly on the day and, like us, will not want any unnecessary pressure placed on students so close to their GCSEs. So what are we introducing?
We are developing and piloting the national reference test because we want to see whether it can help GCSE awarding better recognise changes in student performance over time. For many years, exam boards have used data and statistical predictions to guide awarding, and we expect them to tell us if they have any reason to doubt those predictions, or to award more or less generously than predicted.
We approved 50 or so requests this year to award beyond prediction, and refused none. Even so, teachers are concerned that their efforts to raise student achievement may not be fully reflected in GCSE results. The reference test is designed to provide an additional source of evidence of student performance that, over time, may help with those awarding judgements. It is another source of evidence, if you like, alongside student scripts and the other available data and statistical information.
We believe that participation in the test is in the public interest, as the results should help to provide evidence of national performance that can be reflected in awarding – but, of course, it must be manageable for all involved.
Our plan is that each year a different sample of around 300 schools will be asked to take part. Students in year 11 will be selected at random from each school to provide a nationally representative sample. About 30 students at each school will take an English paper and about another 30 will take a maths paper. Each paper will last roughly one hour. Students will not have to prepare and the burden on schools will be limited through the provision of an external administrator (from NFER) to invigilate.
So, where are we now? We recently completed a series of field trials with 175 schools. More than 4,000 students answered a range of questions to help us to finalise the test materials we will use in our full-scale trial in March next year.
Participation in the trials was voluntary, so we were delighted to see such a positive response. Initial feedback from teachers has been very positive, especially with regard to the lack of intrusion into the day-to-day running of their schools. And we have been told that many schools used the trial as an exercise for students to practise working under exam conditions. I am grateful to those schools and students who took part; it has been a valuable step in helping us refine the details of how the test is to be administered.
NFER is now approaching schools to take part in the full-scale trial, and I would strongly encourage those invited to do so. The trial will only be successful if the participants are nationally representative.
The trial is important so that Ofqual can test its school sampling strategy and be as sure as possible that the questions asked of students are sufficiently well targeted. What is more, it gives us our first chance to see whether the test can differentiate sufficiently well across the broad range of candidates who will sit their GCSEs. Until we see the results, we cannot be sure. We are keen, too, that schools participate, and that they find participation less burdensome than they perhaps feared.