An Unpopular Truth: The English GCSE ‘Fiasco’

There’s been a lot written over the past week about the GCSE English results and the alleged unfairness in this year’s grading, particularly around the C/D borderline. Plenty of those blogs explain how unfair it has been on the students and the schools, and the exact science behind it. I wanted to wait until Ofqual had reported on the issue before commenting.

Ofqual have basically confirmed what we all knew: the grade boundaries in January (due to the small number of students sitting the exam) were generous. The problem for schools is that they used this data to predict whether their students would achieve a C grade or not, and when the grade boundaries were changed it moved the goalposts. You’ll notice in the media that when schools claim their results have dropped by X%, this is generally against their predictions and not against previous results (although undoubtedly some schools have suffered year on year).

The massive shift in the grade boundaries for the C grade (10 marks on the exam, 2 marks on the speaking and listening and 3 marks on the controlled assessment) is practically unheard of. It means that students who achieved the same mark in January and June could have been awarded different overall grades. This, however, is nothing new. Grade boundaries have always shifted between exam series; in a subject such as English this ensures that different series are fair and that grades are comparable between them. They just don’t normally shift by this much.
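To make that concrete, here’s a toy example in Python. Only the size of the shift (10 exam marks) is taken from above; the boundary and mark values are invented for illustration.

```python
# Hypothetical illustration of a between-series boundary shift.
# Only the size of the shift (10 exam marks) comes from this post;
# the boundary and mark values below are invented.

january_c_boundary = 43                    # hypothetical January C boundary
june_c_boundary = january_c_boundary + 10  # after the 10-mark shift

raw_mark = 48  # the same raw mark, achieved in January and again in June

print(raw_mark >= january_c_boundary)  # True  -> a C in January
print(raw_mark >= june_c_boundary)     # False -> a D in June
```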

There’s an added complication to this year’s GCSE exams that hasn’t been widely commented on in the media: it was the first time the 2010 specifications certificated, i.e. the first time students were awarded a GCSE grade on this course.

While some schools have achieved fewer C (and above) grades, many others have gone up; the overall number of C+ grades is only marginally down on previous years. The schools that have dropped tend to be those generally considered high-achieving, in middle-class, leafy suburban areas. The schools that have protected (and in some cases improved) their A*-C percentage tend to be the ones that find it much harder because they’re in more deprived, inner-city areas. They’re the schools that have to work harder to get the C grades. They’re the schools that focused heavily on the new specs and the modular system, making use of the different entry points and paying particular attention to the change in assessment objectives.

Some people might argue they found the loopholes and exploited them. They probably did. I know we spent a lot of time navigating our way through the many different pathways and options. We did it for two reasons: first, because we wanted to improve our students’ grades and help them move on to whatever came next; and second, because we needed to improve our school’s results. We aren’t a traditional leafy, middle-class school, but neither are we an inner-city ‘comp’ (they’re mostly academies today anyway). We’re a rural ‘coasting’ school.

Lots of the schools that have dropped percentage points because of the boundary changes only submitted work in the summer. It was, for many, the first time they had submitted controlled assessments (unit 3) and speaking and listening (unit 2), and the first time their students had sat the exam (unit 1). We didn’t. We entered all our students for the exam in the very first sitting possible: January 2011. We wanted to see how well our students did (not very well), how the exam board were going to mark them, what the papers were going to be like, and so on. We learned a lot from that exam: we analysed the results carefully, we got a senior marker from AQA into the school to go through the Unit 1 exam with the department, and we used that (in conjunction with an AST we brought in four days a week) to create a ‘how to ensure Year 11 get a C grade’ guide.

In addition, we created a detailed tracking sheet (I’ve posted it to the language list and, if people would find it useful, will share it here) that tracked where EVERY student in the year group was. We analysed every controlled assessment, every speaking and listening task and every mock exam. If it was below target, they redid it. I imagine most schools do this, but I also suspect this is where some schools fell down. To achieve a C at GCSE a student needs 180 UMS marks. That never changes; what does change is the number of UMS points a specific raw mark converts to. We always tracked using the January 2011 grade boundaries (which held in June 2011 and January 2012, but NOT in June 2012). I imagine most schools did the same. What we did differently was build in a contingency: we said 187 was a C, 217 was a B, 247 was an A, and so on. This meant that when the grade boundaries shifted, we lost only a few percentage points, and a handful of gutted students achieved a D rather than a C (generally missing out by one or two UMS marks).
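For concreteness, here’s a minimal sketch in Python of that contingency logic. The 180 UMS figure and the buffered 187/217/247 cut-offs come from this post; the function names and structure are purely illustrative, not our actual spreadsheet.

```python
# Minimal sketch of the buffered UMS tracking described above.
# The official C threshold was 180 UMS; we tracked against buffered
# cut-offs (187 for a C, 217 for a B, 247 for an A) so that a modest
# boundary shift wouldn't tip a 'C' student into a D.
# Names and structure here are illustrative, not our actual sheet.

BUFFERED_BOUNDARIES = [
    (247, "A"),
    (217, "B"),
    (187, "C"),
]

def tracked_grade(ums: int) -> str:
    """Return the grade we *tracked* a student at, using buffered cut-offs."""
    for cutoff, grade in BUFFERED_BOUNDARIES:
        if ums >= cutoff:
            return grade
    return "D or below"

def needs_redo(ums: int, target_grade: str = "C") -> bool:
    """Flag a student for a redo if they're below the buffered target cut-off."""
    target_cutoff = {grade: cutoff for cutoff, grade in BUFFERED_BOUNDARIES}[target_grade]
    return ums < target_cutoff

# A student on 183 UMS was officially a C against the January 2011
# boundaries (180), but the buffered sheet still flagged them as at risk:
print(tracked_grade(183))  # -> "D or below"
print(needs_redo(183))     # -> True
```

The size of the buffer (seven UMS points here) matters less than the principle: track against stricter cut-offs than the published ones, so a boundary shift eats into the margin rather than the grade.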

Ofqual have laid the blame squarely at the door of some schools, claiming in their report today that schools overpredicted. They did, and they didn’t build any tolerance into their own systems, but they did so based on previous data from a new specification that changed at the last minute. Are schools to blame for this? I don’t think so, but neither are the exam boards. Both were stuck between a rock and a hard place, and both have been left with egg on their faces. We managed to avoid this situation.

We did so partly by working hard on the controlled assessments and on moderation, to ensure that our grades weren’t changed during external moderation. We internally moderated EVERY piece of controlled assessment the students completed. We internally moderated a selection of folders at the end of the course before the marks were sent off to the exam board. We went along to AQA meetings with our local coursework advisor and took samples of marked work to check that our marking was accurate. Our controlled assessment marks were upheld. We were affected by the boundary change for the controlled assessments, but because we’d built in a contingency it didn’t affect our grades.

All this took time. It took me nearly two weeks to build the tracking spreadsheets, and it took up endless department meetings standardising and moderating the work, but it was worth it. We protected our students and did our best by them. We listened to the exam board, we worried that grade boundaries might change as it was the first time some schools had entered entire cohorts, and we took advice from external sources. This year our A*-C in English went up from 69% to 76%.

I’m proud of what we achieved, but as this summer shows we have to be very careful not to become complacent.
