Thinking Aloud: On Statistics, Again

“99 percent of all statistics only tell 49 percent of the story.”

– Ron DeLegge II, Gents With No Cents

My previous post on statistics proved to be very popular – not least with David Didau – and as such I felt that my promised follow-up should happen a bit quicker. So here goes.

Target setting is a controversial process. Often it’s done rather arbitrarily – “60% last year guys, so this year we’re aiming for 65%!” – or unfairly – “we were 15% off national last year, so this year we’d better hit national or else!”. As I discussed in the previous part of this discussion on data, the whole idea of ‘national’ is a debatable one anyway. Plus, any improvement process needs to be exactly that – a process: it’s rare that school results improve by 10, 15 or 20% unless something dramatically different (or right and proper) occurs.

However, I do believe there is a place for target setting, even if this goes against my belief that free market principles have no place in public service. Unfortunately, we are in the midst of a free market takeover in education, so we have to make the system work with some semblance of fairness for staff and students alike. We need to make sure targets are reasonable and within the realm of possibility (unlike in some schools I’ve seen, where students coming out of KS2 at 2a are set C grade targets – don’t get me wrong, I’m sure this could be achieved in some cases, but in every case?).

It does not matter how you package it up, and whatever your school’s ethos is, statistics simply do not lie (a caveat here: statistics don’t lie, no, but what you can make them tell you can be significantly different from what another person might see in them). You can set individual targets for a student, which is fair enough, but cohort targets will tend towards a reflection of what’s happening nationally – which is what you’d expect as a sample increases in size.

So let’s look at how targets can be fairly generated.

The centrepiece of the previous article was the table below. The decimals in the left-hand column are the proportions of students who made three levels of progress (3LP) from levels 2, 3, 4 and 5 respectively, with an estimate for level 6.

                      Year 7   Year 8   Year 9
Student 1                  2        4        2
Student 2                  3        4        2
Student 3                  4        4        4
Student 4                  4        4        4
Student 5                  5        4        6
Student 6                  6        4        6
0.15 (Level 2)          0.15        0      0.3
0.42 (Level 3)          0.42        0        0
0.69 (Level 4)          1.38     4.14     1.38
0.77 (Level 5)          0.77        0        0
0.85 (Level 6)          0.85        0      1.7
Students making 3LP     3.57     4.14     3.38

By using the decimals and the count of students at each level at the end of KS2, you can treat each proportion as a probability and generate an expected number of students making 3LP for each level, and therefore for each year group (Expectation = Trials × Probability – the trials in this case being the number of students at each KS2 starting point).
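If you prefer to see that calculation written out, here’s a minimal sketch in Python (not the spreadsheet itself) that reproduces the bottom row of the table from the KS2 levels and the proportions above:

```python
# A minimal sketch of the expectation calculation (not the spreadsheet itself).
# Each 3LP proportion is treated as a probability and multiplied by the number
# of students at that KS2 starting level: expectation = trials x probability.
from collections import Counter

# Proportions of students making 3LP by KS2 level, from the table above
# (the Level 6 figure is an estimate).
P_3LP = {2: 0.15, 3: 0.42, 4: 0.69, 5: 0.77, 6: 0.85}

# KS2 levels for each year group, taken from the table above.
year_groups = {
    "Year 7": [2, 3, 4, 4, 5, 6],
    "Year 8": [4, 4, 4, 4, 4, 4],
    "Year 9": [2, 2, 4, 4, 6, 6],
}

for name, levels in year_groups.items():
    counts = Counter(levels)  # trials at each starting level
    expected = sum(P_3LP[level] * n for level, n in counts.items())
    print(f"{name}: expected students making 3LP = {expected:.2f}")

# The output matches the bottom row of the table: 3.57, 4.14 and 3.38.
```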

Let’s summarise what has just happened here.

  1. I used prior attainment at KS2.
  2. I used national performance measures.
  3. I generated an expected value of students making 3LP based on these figures.

As I mentioned previously, based on this notion you would expect that, for the figures above, Year 8 would perform better at KS4 than the other two year groups, so targets should reflect that. Two years ago, I realised that this was the way I would start setting targets for my department, but with a little tweak.

The issue was that, generally, the proportions of our students making 3LP were in line with, or bettered, national proportions at the different sub-levels – there were a few odd gaps, but for most of the sub-levels this was true. The problem was that, because of the generally higher proportion of lower attainers we have, the total proportion of students making 3LP was well below national.
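To illustrate the effect (with made-up numbers rather than my own school’s data): even if every KS2 starting point matches the national 3LP rate exactly, a cohort weighted towards lower attainers will still come out below the national headline figure.

```python
# Made-up numbers to illustrate the point: every KS2 level performs exactly in
# line with the national 3LP rate, but a cohort weighted towards lower
# attainers still ends up below the national headline figure.
P_3LP = {3: 0.42, 4: 0.69, 5: 0.77}      # 3LP proportions by KS2 level (as above)

national_mix = {3: 20, 4: 50, 5: 30}     # hypothetical national intake mix
school_mix = {3: 45, 4: 40, 5: 15}       # hypothetical school with more low attainers

def overall_3lp(mix):
    """Weighted overall 3LP proportion for a given intake mix."""
    total = sum(mix.values())
    return sum(P_3LP[level] * n for level, n in mix.items()) / total

print(f"National overall 3LP: {overall_3lp(national_mix):.1%}")  # roughly 66%
print(f"School overall 3LP:   {overall_3lp(school_mix):.1%}")    # roughly 58%
```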

So, I came up with a model based on three principles:

  1. Targets should be set to reflect the school population base.
  2. Targets should be based on a national baseline, as large samples tend to reflect (in proportional terms) the population as a whole.
  3. Targets can be set in a way that builds in year-on-year improvement, so as to satisfy the whims of those who judge us.

It’s simplistic, but fair.

With this in mind, I created a spreadsheet to generate targets based on these three figures – firstly for a year group, but then I realised there was no harm in doing this class by class either, so that staff could have more specific targets and show how they would be contributing to the cause, so to speak.

You can download it by clicking on the picture below (if this doesn’t work – drop me an email via my about page, or a tweet, and I’ll sort it for you).

Progress Predictor

I think it’s relatively simple to use. There are three main input areas.

The first part is the student data. Simply put the forename and surname of the students you’re analysing, and the level they obtained at the end of KS2 for your subject.

The second part is the national measure for 3 and 4LP. For each sub-level, input the proportions of students making 3LP (in yellow) and 4LP (in orange) nationally. You can get this data from your school data manager or the DfE website. This is automatically set as a percentage, so don’t worry about formatting the cells.

The third, and probably most controversial, part is the school gain for 3LP and 4LP. This is the bit that’s open to discussion. Basically, this is the figure by which you can adjust your own school measure to either be in line with national (so set at 0%) or above it (anything above 0%, obviously). If you want to see year-on-year improvement, this is how you can adjust national measures to create a school measure that will build this into targets.

Ultimately this produces three figures, under the coloured tables. The first is the % of students that you’d, erm, expect to make expected progress. Similarly there’s a % for the students that you’d expect to make good progress, and finally one for the A*-C measure, which isn’t as much at the forefront of performance measures but it’s still good to know.
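For anyone who wants to see the logic outside a spreadsheet, here’s a minimal sketch of how the model could be expressed in Python – an illustration of the idea rather than the workbook’s exact formulas. The 4LP proportions and the 2% gain below are made-up numbers, and the A*-C figure is left out because it needs a level-to-grade mapping.

```python
# A hedged sketch of the predictor's logic - an illustration of the model, not
# the workbook's exact formulas. The 4LP proportions and the 2% school gain
# are made-up; the A*-C measure is omitted as it needs a level-to-grade mapping.
P_3LP = {2: 0.15, 3: 0.42, 4: 0.69, 5: 0.77, 6: 0.85}   # national 3LP by KS2 level
P_4LP = {2: 0.05, 3: 0.15, 4: 0.30, 5: 0.40, 6: 0.55}   # illustrative 4LP figures

def cohort_target(ks2_levels, national, gain=0.0):
    """Expected % of the cohort hitting the measure, nudged up by the school gain."""
    expected = sum(min(national[level] + gain, 1.0) for level in ks2_levels)
    return expected / len(ks2_levels)

class_levels = [2, 3, 4, 4, 5, 6]  # one class's KS2 starting points

print(f"Expected progress (3LP) target: {cohort_target(class_levels, P_3LP, gain=0.02):.1%}")
print(f"Good progress (4LP) target:     {cohort_target(class_levels, P_4LP, gain=0.02):.1%}")
```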

You can do what you like with the spreadsheet and the format of it. I’m not precious and I’d be interested to see if people can develop this model further. If you do, let me know what you’ve come up with.

By inputting your own school data, you’re setting targets based on your school population. By inputting national measures, there’s an understanding that the performance of students in school reflects the performance of similar students nationally, rather than guesswork. Finally, if a school wants to simply match national trends (because it has underperformed in the past or wants to maintain stability) or outperform them (because the school traditionally does so), it can set a reasonable school measure to do so.

I’d be genuinely interested to know what people think of this methodology. Is it wrong? Is it right? Is it somewhere in between? Is it too simplistic? As I say, I’ve been using this system for a couple of years now, because I wanted the targets that my team were being set to be a fair reflection of the students in front of them and those of similar abilities nationally.

I’ve found this method is much better appreciated and understood by teachers in my team than a superficial target (i.e. “we achieved 60% 3LP this year, so we should be going for 65% next” – it’s not as easy as that). Sitting down as a team and going through the numbers allows for an open and transparent method of setting targets and builds a reflective culture.

One thing I could have done with this is build in measures for Pupil Premium and SEN students. I’m not going to do that. If schools are supposed to be closing gaps between disadvantaged students and the rest of the school population, then targets should not be set in a way that builds in those differences before we’ve even started.

Finally, remember this – this is just a theory. As I’ve mentioned before, it’s open for criticism (constructive or otherwise!), and it is not a reflection of my personal ideology but of the situation that leaders in education find ourselves in. But if it helps make a positive difference in the efforts of all of you out there, then it’s all the better for being out in the public domain.
