Frequently Asked Questions
We've heard of Comparative Judgement (CJ); what's Adaptive Comparative Judgement (ACJ)?
To the user, ACJ will probably look the same as any CJ product. The 'Adaptive' part gets to work as the session unfolds, usually after each script has been seen a number of times.
"The A for ‘adaptivity’ in ACJ means that the choice of which objects are presented to the judges depends on the outcomes of judgments made so far – the idea being to use judge time efficiently by not presenting objects to compare where the result of the comparison is almost certain." Bramley, T. (2015)
The adaptive element of the ACJ engine used in this assessment uses its algorithm to 'fine-tune' judgements by referring back to previous pairwise judgements involving the same scripts, made by other judges, rather than simply pairing scripts at random. In this way, the algorithm can build confidence in the CPC rank placement for each script more quickly, avoiding the unnecessary judgements that non-adaptive script pairings would create. This, in turn, reduces the overall time required to reach a final CPC rank order whilst maintaining a strong level of overall assessment reliability.
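To illustrate the general idea only (this is not the COMPARE engine's actual algorithm), here is a minimal Python sketch of adaptive pairing under a simple Bradley-Terry-style model: the next pair presented is the one whose outcome is least certain given the current ability estimates. The script names, ability values and the `choose_next_pair` function are all assumptions made for the sketch.

```python
# Illustrative sketch only: a simple Bradley-Terry-style adaptive pairing rule.
import math
from itertools import combinations

def win_probability(ability_a, ability_b):
    """Predicted probability that the first script beats the second (Bradley-Terry form)."""
    return 1.0 / (1.0 + math.exp(-(ability_a - ability_b)))

def choose_next_pair(abilities, already_paired):
    """Pick the unseen pair whose predicted outcome is closest to 50:50, i.e. least certain."""
    best_pair, best_gap = None, float("inf")
    for a, b in combinations(sorted(abilities), 2):
        if (a, b) in already_paired or (b, a) in already_paired:
            continue
        gap = abs(win_probability(abilities[a], abilities[b]) - 0.5)
        if gap < best_gap:
            best_pair, best_gap = (a, b), gap
    return best_pair

# Assumed ability estimates after a few rounds of random pairing.
abilities = {"script_1": 0.9, "script_2": 0.1, "script_3": -1.2, "script_4": 0.2}
print(choose_next_pair(abilities, already_paired={("script_1", "script_2")}))
```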
Can we moderate using ACJ in a group with other schools?
Yes. One benefit of using COMPARE to discover the CPC is that schools receive a report against all other scripts in the benchmark session. One of our USPs is that we can also enable our system to produce a separate section within your main report, which we call the 'Region CPC', that groups schools, LAs, MATs or any number of schools using the same codes - great for moderation! Previously we have used these codes for Local Authority and MAT benchmarking. You can see what these reports might look like here.
How many judges do we need?
The more the merrier! We think that the more teachers involved in the judging sessions, the better the formative discussions between staff about what great writing looks like. We recommend at least 5 judges per session per school to reduce the number of judgements each judge needs to make.
If you're judging in your own school and not part of the benchmarking sessions, this is how we work out the number of paired scripts each judge needs to view:
Each script needs to be judged 20 times. So:
Number of scripts × 20 = total number of times scripts appear in judgements; divide by 2 (each paired judgement covers two scripts) = number of paired judgements; divide by the number of judges = number of judgements each judge needs to do. A short worked sketch of this calculation follows the examples below.
For example:
45 pupil scripts × 20 = 900; divide by 2 = 450 paired judgements; divide by 9 teachers = each judge completes 50 judgements.
Children as writers and judges: 30 pupil scripts × 20 = 600; divide by 2 = 300 paired judgements; divide by 30 students = each child completes 10 judgements.
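As a quick check on the arithmetic above, here is a minimal Python sketch of the same calculation. The function name and figures simply mirror the worked examples; they are illustrative, not part of the COMPARE system.

```python
# Minimal sketch of the judgement-count arithmetic described above.
def judgements_per_judge(num_scripts, num_judges, views_per_script=20):
    """Each script is seen about 20 times; each paired judgement covers two scripts."""
    total_script_views = num_scripts * views_per_script   # e.g. 45 * 20 = 900
    paired_judgements = total_script_views / 2             # e.g. 900 / 2 = 450
    return paired_judgements / num_judges                  # e.g. 450 / 9 = 50

print(judgements_per_judge(45, 9))    # 50.0 judgements per teacher
print(judgements_per_judge(30, 30))   # 10.0 judgements per pupil judge
```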
Do we judge or benchmark our own children's work?
No. Unlike some CJ engines, you will judge a randomly allocated selection of scripts from across the session, meaning it's unlikely that you will judge scripts from your own school. After all, you already know what writing looks like in your school, and teachers really benefit from observing work from outside their context - great for moderation and for improving teaching and learning.
The benefit of this is that there is much less judgement bias. For instance, if you judge your own school's work directly against other schools', bias is likely to creep in because you naturally want your own school to do well. With COMPARE, the way we allocate scripts to judges means you're highly unlikely to know, or be able to work out, whose scripts are whose.
Imagine being able to say that the CPC judgement report your school receives was made by 20 professionals who don't know your school or your children!
Pressure for remarking, with all its challenges and cost? We don't think so!
Will we be able to find out who's 'working towards', 'working at' or 'exceeding' at the end of Y6 for writing assessment?
No. The DfE produces comprehensive criteria-based guidance for end of key stage judgements. You'll need to use this guidance and the exemplars to understand the level of writing attainment in your school. Comparative Judgement cannot do this for you, as it's a different type of assessment. We have heard of schools taking the percentages from previous years' national teacher-assessed writing results and matching them to the reported percentiles in sessions. However, this is unhelpful and unreliable: much depends on sample size, how generalisable the sample is, and annual changes to the DfE criteria. You can, however, use the pieces of work from each session in your portfolio of evidence for end of key stage moderation, which is a good thing for reducing workload and maintaining a low-stakes approach for your pupils.
What's the judge misfit score?
Comparative Judgement works by using a mathematical algorithm to estimate the likelihood of judges agreeing that script A is better than script B, or vice versa. It essentially measures each judge's consistency against the other judges (for example, consistently choosing A over B and A over C) to derive the CPC. Misfits occur when one judge consistently disagrees with the judgements that other judges make when comparing similarly ranked scripts. A high misfit score means that judge scores differently from the others. This isn't always negative and can lead to professional discussions about what qualities judges look for in subject competencies. Judges can be removed for computational convenience or remain for conceptual purity.
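Purely as an illustration (not the exact formula our engine uses), a misfit-style statistic can be thought of as the average squared difference between a judge's actual decisions and the outcomes a simple pairwise model predicts. The abilities, decisions and `misfit` function below are assumptions made for this sketch.

```python
# Illustrative misfit-style statistic: mean squared residual between one judge's
# decisions and model-predicted win probabilities. Not the engine's exact formula.
import math

def predicted_win(ability_a, ability_b):
    """Model-predicted probability that the first script wins the comparison."""
    return 1.0 / (1.0 + math.exp(-(ability_a - ability_b)))

def misfit(judge_decisions, abilities):
    """Mean squared residual between one judge's decisions and the model's predictions."""
    residuals = []
    for script_a, script_b, a_won in judge_decisions:
        expected = predicted_win(abilities[script_a], abilities[script_b])
        residuals.append((int(a_won) - expected) ** 2)
    return sum(residuals) / len(residuals)

# Assumed ability estimates and one judge's decisions (True = first script chosen).
abilities = {"A": 1.5, "B": 0.0, "C": -1.0}
decisions = [("A", "B", True), ("A", "C", True), ("B", "C", False)]
print(round(misfit(decisions, abilities), 3))  # higher values = more disagreement with the consensus
```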
There are two other CJ companies doing this, so why Assess Progress?
Our USPs are:
- Four COMPARE sessions in Y6 and two in Y1-5 per year.
- Adaptive Comparative Judgement.
- Regional CPC: the ability to subdivide the CPC order by a group of schools (MAT, Academy Chain, LA).
- Rarely, if ever, judge your own children's/school's work in benchmarked sessions.
- Ability to use any type of media in a COMPARE session: PDF, MP4, HTML, Weblink etc.
- Unlimited use in Secondary schools with ability for those schools to arrange benchmarking with other schools in subjects not usually supported.
* Bramley, T. (2015). Investigating the reliability of Adaptive Comparative Judgment. Cambridge Assessment Research Report. Cambridge, UK: Cambridge Assessment.