We have said that exam boards will rely more heavily on predictions this year for the new qualifications. But how are the predictions generated? And what do we mean by prediction matrices?
I’ve talked in earlier blogs about how exam boards use predictions based on prior attainment. They are used to guide awarders’ decisions, to make sure standards are aligned between boards, and to maintain standards when qualifications change.
We’ve made a short explanatory film to show how the exam boards use results from a previous year to predict results for the current year.
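The core idea behind a prediction matrix can be sketched in a few lines of code. The figures, grouping into three prior-attainment bands, and grade set below are invented for illustration only; real prediction matrices use finer prior-attainment deciles and actual reference-year data.

```python
# Illustrative sketch of a prediction matrix (all figures invented).
#
# Step 1: from a reference year, record the grade distribution achieved
# by students in each prior-attainment band.
# Step 2: weight those distributions by the current cohort's
# prior-attainment profile to predict this year's grade distribution.

# Reference-year outcomes: for each prior-attainment band, the
# proportion of students achieving each grade (each row sums to 1).
reference_matrix = {
    "high":   {"A": 0.60, "B": 0.30, "C": 0.10},
    "middle": {"A": 0.20, "B": 0.50, "C": 0.30},
    "low":    {"A": 0.05, "B": 0.35, "C": 0.60},
}

# Current cohort: share of entrants in each prior-attainment band.
current_profile = {"high": 0.25, "middle": 0.50, "low": 0.25}

def predict_outcomes(matrix, profile):
    """Predicted grade distribution for the current cohort."""
    grades = next(iter(matrix.values())).keys()
    return {
        grade: sum(profile[band] * matrix[band][grade] for band in matrix)
        for grade in grades
    }

prediction = predict_outcomes(reference_matrix, current_profile)
```

The prediction is simply a weighted average: if this year's cohort has weaker prior attainment than the reference year's, the predicted grade distribution shifts down accordingly, and vice versa.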
A reminder of how these predictions will be used in 2017
At A level, exam boards will use predictions to make sure that, in general, a student who would previously have achieved a particular grade will achieve the same grade this year. The demand at A level has not changed – in general, it will not be more difficult to achieve a particular grade – but the structure has changed, the subject content has been revised, and the style of papers has changed in some subjects. Senior examiners will, as is always the case, look at student work to make sure that the grade boundaries suggested by the predictions are appropriate.
In the new 9 to 1 GCSEs, the structure has changed and the content is more demanding. In setting standards in these new qualifications, we have been clear that we want to provide an anchor between new and old grades. Exam boards will use predictions so that:
- broadly the same proportion of students will achieve grade 4 and above as previously achieved grade C and above
- broadly the same proportion of students will achieve grade 7 and above as previously achieved grade A and above
- the bottom of grade 1 will be aligned with the bottom of grade G
Just as at A level, senior examiners will look at student work at these key grades to make sure the grade boundaries suggested by the predictions are appropriate.
Exam boards will also use predictions to maintain standards in the unreformed qualifications.
If you have any suggestions for topics you would like us to cover ahead of this summer’s exam series, do let us know, either by commenting at the end of this blog or by emailing us at email@example.com
Associate Director, Standards and Comparability
Comment by Stephen Elliott:
Do Ofqual have a response to this article? It raises fundamental questions. On behalf of parents struggling to understand the role OECD Pisa plays in our education qualifications and examinations system in the UK, please provide a reply. The shortlink to the article is: http://wp.me/pateI-NW
The OECD’s Programme for International Student Assessment (Pisa) has now taken the bold step of analysing measures of “happiness,” “well-being” and “anxiety” for individual countries (see New Pisa happiness table, TES 19.04.2017 https://www.tes.com/news/school-news/breaking-news/new-pisa-happiness-table-see-where-uk-pupils-rank).
The claim is made that “life satisfaction,” for example, can be measured to two-decimal-place accuracy. This raises the question: can complex constructs such as happiness or anxiety really be represented as a number like 7.26? For two giants of 20th-century thought – the philosopher Ludwig Wittgenstein and the father of quantum physics, Niels Bohr – the answer to this question is an unequivocal “no.”
Surely common sense itself dictates that constructs such as happiness, anxiety and well-being cannot be captured in a single number? In his book Three Seductive Ideas, the Harvard psychologist Jerome Kagan draws on the writings of Bohr and Wittgenstein to argue that measures of constructs such as happiness cannot be represented as numbers. He writes: “The first premise is that the unit of analysis … must be a person in a context, rather than an isolated characteristic of that person.” Wittgenstein and Bohr (independently) arrived at the conclusion that what is measured cannot be separated from the measurement context. It follows that when an individual’s happiness is being measured, a description of the questions on the Pisa questionnaire must appear in the measurement statement because these questions help define what the measurer means by the word happiness.
Kagan rejects the practice of reporting the measurement of complex psychological constructs using numbers: “The contrasting view, held by Whitehead [co-author of the Principia Mathematica] and Wittgenstein, insists that every description should refer to … the circumstances of the observation.” The reason for including a description of the measuring instrument isn’t difficult to see. Kagan points out that “Most investigators who study “anxiety” or “fear” use answers on a standard questionnaire or responses to an interview to decide which of their subjects are anxious or fearful. A smaller number of scientists ask close friends or relatives of each subject to evaluate how anxious the person is. A still smaller group measures the heart rate, blood pressure, galvanic skin response, or salivary level of subjects. Unfortunately, these three sources of information rarely agree.”
Given that a change in the measuring tool means a change in the reported measurement, one must include a description of the measuring instrument in order to “communicate unambiguously,” as Bohr expressed it. One can never simply write “happiness = 4.29” (as in Pisa tables) because there is no such thing as an instrument-independent measure of happiness. We have no idea what happiness is as a thing-in-itself. Kagan notes the implications for psychologists of the measurement principles set out by Niels Bohr: “Modern physicists appreciate that light can behave as a wave or a particle depending on the method of measurement. But some contemporary psychologists write as if that maxim did not apply to consciousness, intelligence, or fear.” According to Bohr, when one reports psychological measurements, the requirement to describe the measurement situation means that ordinary language must replace numbers. Werner Heisenberg summarised his mentor’s teachings: “If we want to say anything at all about nature – and what else does science try to do – we must pass from mathematical to everyday language.”
(To simplify matters somewhat, while numbers function perfectly well when observing the motion of a tennis ball or a star, the psychologist cannot observe directly the pupil’s happiness. Bohr argued that there was “a deep-going analogy” between measurement in quantum physics and measurement in psychology because both were concerned with measuring constructs which transcend the limits of ordinary experience. According to Bohr, because the physicist, like the psychologist (in respect of attempts to measure happiness), cannot directly experience electrons and photons, “physics concerns what we can say about nature,” and numbers must therefore give way to ordinary language.)
The arguments advanced above apply, without modification, to Pisa’s core activity of measuring pupil ability. A simple thought experiment (first reported in the TES of 26.07.2013) makes this clear. Suppose that a pupil is awarded a perfect score in a GCSE mathematics examination. It seems sensible to conjecture that if Einstein were alive, he too would secure a perfect score on this mathematics paper. Given the title on the front page of the examination paper, one has the clear sense that the examination measures ability in mathematics. Is one therefore justified in saying that Einstein and the pupil have the same mathematical ability?
This paradoxical outcome results from the erroneous treatment of mathematical ability as something entirely divorced from the questions which make up the examination paper. It is clear that the pupil’s mathematical achievements are dwarfed by Einstein’s; to ascribe equal ability to Einstein and the pupil is to communicate ambiguously. To avoid the paradox one simply has to detail the measurement circumstances in any report of attainment and say: “Einstein and the pupil have the same mathematical ability relative to this particular GCSE mathematics paper.” By including a description of the measuring instrument one is, in effect, making clear the restrictive meaning which attaches to the word “mathematics” as it is being used here; school mathematics omits whole areas of the discipline familiar to Einstein such as non-Euclidean geometry, tensor analysis, vector field theory, Newtonian mechanics, and so on.
As with the measurement of happiness, when one factors in a description of the measuring instrument, the paradox dissolves away. Alas for Pisa, the move from numbers to language also dissolves away that organisation’s much-lauded rank orders. Little wonder that Wittgenstein described the reasoning which underpins the statistical model (Item Response Theory) at the heart of the Pisa rankings as “a disease of thought.”
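For readers unfamiliar with the model under criticism: the simplest Item Response Theory model is the Rasch model, which expresses both a person's ability and an item's difficulty as single numbers on a common scale, and gives the probability of a correct answer from their difference. The parameter values below are invented for illustration.

```python
import math

def rasch_probability(theta, b):
    """Rasch model: probability that a person with ability theta
    answers an item of difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# A person one unit above an item's difficulty answers it
# correctly about 73% of the time.
p = rasch_probability(theta=1.0, b=0.0)
```

Note that `theta` is exactly the kind of instrument-independent single-number ability score whose meaningfulness the argument above disputes.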
This brings us to the serious implications of the arguments set out above for the new GCSE grade 5. If a switch from numbers to language entirely invalidates the practice of ordering countries according to the efficacy of their education systems, the claims made concerning the new GCSE grade 5 are likewise undermined. Given the assertion that grade 5 reflects the academic standards of high-performing international jurisdictions as identified by their Pisa ranks, what possible justification can be offered for assigning a privileged role to the GCSE grade 5 in school performance tables?
To date, Pisa rankings have not impacted directly on the life chances of particular children in this country. This would change if individual pupils failing to reach the grade 5 standard were construed as having fallen short of international standards (whatever that means). If one accepts the reasoning of Wittgenstein and Bohr, grade 5 can represent nothing more than a standard somewhere between grade 4 and grade 6. Any attempt to accord it special status, thereby giving it a central role in the EBacc and/or performance tables, risks exposing the new GCSE grading scale to ridicule.
Dr Hugh Morrison, The Queen’s University of Belfast (retired) firstname.lastname@example.org