Richard Challoner School

1. The Process and Quality Assurance

Which assessment evidence has been used in the process, and how?

In each subject area, we started by pulling together data from a range of assessments to build an ‘evidence pack’ for each student:

  • Assessments completed during the recent assessment windows
  • Year 11 mock exams (for GCSE subjects)
  • Year 13 exams from the exam week in late September/early October (for A Level subjects)
  • Other robust assessments that were ‘uniform’ across the cohort (including NEA/coursework where this was part of the course). For these other assessments, consideration was also given to factors such as the level of control under which they were completed and the coverage of specific assessment objectives.

Although some of these assessments had previously had specific grades attached to them, we decided to set those grades aside and instead work, initially, with percentage scores from these assessments. (Assigning grades to smaller, specific pieces of work often happens in normal times to give students, parents and staff a useful indicator of the quality of that work, but such a grade is a rather crude indicator unless it forms part of a full exam suite.)

Subject teams then went through a rigorous process of deciding how to weight each of these pieces of assessment evidence, taking into account factors such as when the assessment was sat (i.e. recently or earlier in the course) and how well the exam specification’s assessment objectives were covered across the assessments.

The ‘weighted’ percentage scores from each of these assessments were then used to produce a final score for each student.

For example:

In this example, the scores from the most recent assessment windows have a combined weighting of 0.7 (0.4 for paper 1 and 0.3 for paper 2), reflecting the guidance that judgements should lean more heavily on recent work. The mock exam assessed a relatively narrow proportion of the overall assessment objectives and so has been given a relatively small weighting (0.2), as has the interim assessment.
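To make the arithmetic concrete, here is a minimal sketch of the weighted-score calculation. It follows the example weightings above, but the 0.1 weighting assumed for the interim assessment (so that the weights sum to 1) and the student’s scores are purely illustrative.

    # Minimal sketch of the weighted-score calculation described above.
    # The 0.1 interim weighting and the student's scores are illustrative
    # assumptions, not figures from the actual process.

    def weighted_final_score(percentages, weights):
        """Combine percentage scores (0-100) into one final score.

        Both arguments map assessment name -> value; the weights are
        assumed to sum to 1.
        """
        return sum(percentages[name] * weights[name] for name in weights)

    weights = {"paper_1": 0.4, "paper_2": 0.3, "mock": 0.2, "interim": 0.1}
    scores = {"paper_1": 68.0, "paper_2": 72.0, "mock": 61.0, "interim": 70.0}

    print(round(weighted_final_score(scores, weights), 1))  # -> 68.0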


How did you then arrive at actual grades for year 11 and year 13 students?

Having arrived at amalgamated final scores, subject teams then set grade boundaries. This was done using candidate numbers rather than names, in order to reduce the chance of unconscious bias shaping these decisions.

The evidence available for each anonymised individual was sampled in order to award a final grade to that student’s body of work in its entirety, rather than to grade individual pieces of work. This involved subject teachers reviewing physical papers from the recent assessment window alongside the grade descriptors provided to teachers by JCQ (see HERE), as well as exemplar grading material provided by the exam boards. Rigorous discussions within subject teams took place to ensure that the grade awarded to each student’s body of evidence was fair and reflective of the evidence available.
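Purely as an illustration of the boundary step, the sketch below maps an amalgamated final score to a GCSE-style 9–1 grade. The boundary values are hypothetical; in practice, boundaries were set by each subject team and the final grade was confirmed by reviewing the student’s body of evidence as described above.

    # Hypothetical grade boundaries only; real boundaries were set by each
    # subject team and moderated through discussion and sampling.
    GRADE_BOUNDARIES = [  # (minimum final score, grade), highest first
        (85, "9"), (75, "8"), (66, "7"), (58, "6"),
        (50, "5"), (42, "4"), (32, "3"), (22, "2"), (12, "1"),
    ]

    def grade_for(score):
        for minimum, grade in GRADE_BOUNDARIES:
            if score >= minimum:
                return grade
        return "U"

    print(grade_for(68.0))  # -> "7" with these illustrative boundaries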


Have you taken into account special circumstances owing to the disruption faced by individual students (e.g. absences during assessments, prolonged periods of absence for individuals, family bereavement, etc.)?

Yes. Where a student was absent for a particular assessment, or faced a substantial barrier to accessing lessons in the lead-up to it, adjustments were made. Likewise, in the rare circumstance that a subject team used evidence from assessments in which students did not receive their usual access arrangements, adjustments were made where necessary.

Examples of the adjustments made in these situations include changes to the weighting given to different assessments, the use of alternative evidence for individuals, or the addition of marks to specific assessments (as happens in a normal exam year for students judged to be eligible for ‘special consideration’).
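As a rough illustration of the first kind of adjustment, the sketch below re-normalises the weightings when one assessment is legitimately missing from a student’s evidence, so that the remaining assessments carry the full weight. The assessment names and values are again hypothetical.

    # Hypothetical sketch: drop a missing assessment and scale the remaining
    # weights so that they still sum to 1.
    def renormalise_weights(weights, missing):
        remaining = {name: w for name, w in weights.items() if name not in missing}
        total = sum(remaining.values())
        return {name: w / total for name, w in remaining.items()}

    weights = {"paper_1": 0.4, "paper_2": 0.3, "mock": 0.2, "interim": 0.1}
    adjusted = renormalise_weights(weights, missing={"mock"})
    print({name: round(w, 3) for name, w in adjusted.items()})
    # -> {'paper_1': 0.5, 'paper_2': 0.375, 'interim': 0.125}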


How have you quality assured this process?

Over the course of this process, there was a series of compulsory staff training sessions to ensure:

  • A shared understanding of how the process was to be conducted
  • An awareness of potential sources of bias and unconscious effects on objectivity, drawing on Ofqual’s guidance on making objective judgements (see HERE)

Preparing for the assessment windows:

  • Robust discussion within teams about what should be assessed during the assessment windows, ensuring sufficient coverage of assessment objectives while taking into account disruption to learning
  • A consistent approach to preparation across classes, in terms of the revision materials and support provided to students
  • Robust discussions between SLT and middle leaders

During the marking of work completed in the assessment windows:

  • Candidate numbers were used rather than names, to reduce the chance of unconscious bias or other effects on objectivity
  • A range of other approaches were taken across subject teams to standardise marking, including (amongst other things) extensive moderation activities and collaborative approaches to marking. In subjects taught by a single teacher, this included working with colleagues in other schools to assist with standardisation.

During the grading process:

  • Centralised data sheets were used to ensure transparency and accurate recording
  • The process of assigning grades and setting grade boundaries (as detailed above) was done using candidate numbers rather than names, where possible (this was not possible where, for example, the process involved reviewing recordings of student performances).
  • Subject teams used the grade descriptors provided by JCQ (See HERE) and exemplar grading materials provided by the exam boards.
  • Robust discussions between SLT and middle leaders

After the grading:

  • Use of historic data and aspirational target setting data to support benchmarking
  • Review by SLT of data, including detailed analysis using specialist analysis software


Where can I read more about your policy?

Our Centre Policy, which was submitted for external quality assurance, is available below.