Tag Archives: assessment

All researchED events kick off well and whilst I’ve sat through a fair few ‘we aren’t expecting a fire alarm test today so if you hear a continuous bell…’ housekeeping announcements in my time, never before has this included instructions on what to do in the event of a lock-down. This made for a very exciting start to rEDBrum (and a silent wonder whether each of the change-over bells would run to a count of five and we’d (very sensibly and in an orderly fashion) be required to dive under the tables). It didn’t happen (for the best, really). It was also the purplest school I’ve ever been to, and if the TES do an award for having a theme and running with it, Dame Elizabeth Cadbury School wins hands down. The announcements and introductions were swiftly followed by a mini-keynote from Daisy Christodoulou on why we need to improve assessment, and this ended up an unintentional but welcome theme threaded through my day.

This event was also the first time I brought someone else from school to a researchED conference; a surprisingly big thing for me. I’m conscious that I hold researchED dear as my ‘thing’ – there are people I know, ideas I’m familiar with and challenged by, and despite the day-to-day stresses of work I can rely on something like this to put me back on track with why I love it. Of course I share what I learn back at school, but bringing someone from one work-world into my other work-world was strange but positive. Anyway, at 7:45 on a Saturday morning our school principal Marcus hopped into our car with his copy of Daisy’s ‘Making Good Progress’ to read along the way.

Introducing a ‘novice’ to all this was a refreshing way to view researchED. Aside from the general what-is-researchED-who-is-Tom-Bennett-yes-everyone-really-does-do-it-for-free stuff, there were a lot of things I found myself giving an overview of that I pretty much take for granted now and assume lots of other people do too. A couple of speakers openly glossed over who Dylan Wiliam or Daniel Willingham are because researchED is pretty much taken as an environment where that’s basic knowledge – a solid case of David Weston’s point on Fundamental Attribution Error. There were a lot of hands up when Tom asked whose first rED it was in his introduction, and whilst some may think it unthinkable for a teacher of 30 years not to know about ‘Inside the Black Box’, I reckon there are more out there who don’t know than do.

Marcus reading ‘Making Good Progress’ was actually a happy coincidence. I’d popped into his office to sort out our travel arrangements for the weekend and rattled off a few of the people who were going to be speaking, including Daisy’s keynote, and whilst I explained some of her work he produced the book from under a pile of incident sheets and exclaimed that he knew the name was familiar (promptly writing his name in the front when I asked if it was the copy I’d lent to another colleague). He’d not started it yet, but when I popped in again on Friday it’s pretty safe to say it was blowing his mind even from Wiliam’s foreword.

My own day was perhaps subconsciously threaded with assessment/feedback/progress. I’ve spent a week cramming in baselines for new pupils and using a new system for the first time (GL Assessments in English, Maths and Science alongside Hodder Reading and Spelling tests, for those who are interested). Ben Newmark’s talk on the mess that is target grades and Tom Sherrington’s takedown of ‘Can-do’ statements make complete sense to me, and it’s confirmation that what I assumed for a long time was naivety on my part – that the reason for all these nonsensical things must have been explained on everyone’s PGCE courses and it was all a ‘teacher’ thing – is actually a snowballing of decisions that nobody can quite explain; it just is what’s done. I’m still moving thoughts on this round in my head so I might consolidate them more clearly at some point. I have to say though, my take-home feeling is that it’s no longer just theories of better ways to approach things: there are researched models out there, Ofsted are pumping out the message that they aren’t looking for specific things, and school leaders have to be really brave to leap and make these changes. The fear of change is very real for many reasons and not all leaders have the autonomy they need to really go for some of these things. I’m not sure what it’ll take to get going, but I sense that as some are starting to jump off the cliff edge it won’t be as hard for others to follow.

As always I left yesterday feeling positive and full of ideas. We’ve still got a week before half term and I’m desperately trying to finish off Relay before Friday, so positivity is welcome. It was great to see people I’ve not seen since September and for my fox shoes to make friends with Cat Scutt’s. I’m still rubbish at talking to people at these things and shall endeavour to do better next time. I think Marcus got a lot out of it too. I know it’s consolidated some ideas for him and given food for thought in other areas. I also know that whilst most of us online buddies are introverts and ignore each other IRL, he’s 100% extrovert and it turns out he spoke to loads of interesting people! We’re in a big period of change at school now and I’m hopeful that there can be an increase in evidence-informed decision-making. At the very least I’m hoping that when I ask nicely for time off to do yet another international conference, or plant the seeds of hosting a rED event, the boss’ll at least know what I’m banging on about.


[Image: ‘multi’ – the picture referred to in the note at the end of this post]

One of the challenges facing our school at the moment due to assessment without levels is our baselining procedure. Our pupils can come to us at any point from year 3 to year 9 (it’s rare that we take new pupils in KS4), with varying experiences of education. Some come straight to us from a mainstream school, some have been at the PRU and others have been allocated a number of hours of home tutoring.

We baseline new pupils upon admission in order to assess their levels and needs, and eventually to provide us with a measure of just how (hopefully) awesome our teaching has been. We use Hodder Oral Reading Tests and Graded Word Spelling Tests for reading and spelling (which we use across the school twice a year). Until last year we used the GOAL online assessments for English, Maths and Science, but with the removal of NC levels this product was withdrawn from the market and, much to the pain of SLT, nothing has replaced it.

We have reverted to the paper version of the GOAL formative assessment that we used before the online tests. This is a series of multiple choice questions with a simple ‘number correct = an NC sub-level’ conversion. There is information in the depths of the teacher’s guide to help analyse the results, but we just report the end level for our records.
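To make that concrete, here’s a minimal sketch (in Python, with invented thresholds and sub-levels – this is not the actual GOAL mark scheme) of the kind of raw-score-to-sub-level lookup that sort of test relies on, which also illustrates just how little it tells a class teacher:

```python
# A minimal sketch of a "number correct = NC sub-level" lookup.
# The thresholds and sub-levels below are invented for illustration;
# they are not the real GOAL conversion table.

SCORE_BANDS = [
    (0, "W"),    # working towards level 1
    (8, "2c"),
    (12, "2b"),
    (16, "2a"),
    (20, "3c"),
    (24, "3b"),
    (28, "3a"),
    (32, "4c"),
]

def sub_level(number_correct: int) -> str:
    """Return the sub-level for a raw score: the highest band reached."""
    result = SCORE_BANDS[0][1]
    for threshold, level in SCORE_BANDS:
        if number_correct >= threshold:
            result = level
    return result

print(sub_level(18))  # -> "2a" with these illustrative bands
```

The single label comes out the same whether the 18 marks were all easy questions at the front of the paper or harder ones at the back, which is exactly the problem described below.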

An issue we find regularly with our pupils is that they come to us with huge gaps in their knowledge. Understandable if you’ve had a series of fixed-term or permanent exclusions. They’ve often missed whole topics and we find, for instance, that they’re brilliant at working out lines of symmetry but give them a 3D shape to identify and they haven’t got a clue.

Something that struck me as I baselined a new pupil the day after reading the ‘Commission on Assessment Without Levels: final report’ is that, as well as changing the way we assess generally, there must be a better way of assessing our pupils as they enter the school. We can’t rely on KS tests to give us a picture of what they can do and, as the report says, ‘There is no intrinsic value in recording formative assessment; what matters is that it is acted on’. As we develop our school assessment systems we need to look at how we build baselines into this, so we can identify exactly what the pupils coming to us can do and where the gaps are that we need to fill. At the moment I give teachers an old NC sub-level that could be based on a pupil getting all the easy questions at the start of the test or all the harder ones at the end. To actually find out what the child can do, the teachers have to work it out themselves.
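For contrast, here’s a hedged sketch of the kind of per-objective breakdown I’d rather hand to teachers. The question data is hypothetical; the idea is simply to group question results by the curriculum objective each question targets, so gaps (like the 3D shapes example above) show up directly instead of being buried inside one sub-level:

```python
# Sketch: summarise baseline responses by objective rather than by total score.
# The (objective, question, correct?) tuples below are invented example data.

from collections import defaultdict

responses = [
    ("lines of symmetry", "Q1", True),
    ("lines of symmetry", "Q2", True),
    ("identifying 3D shapes", "Q3", False),
    ("identifying 3D shapes", "Q4", False),
    ("fractions", "Q5", True),
    ("fractions", "Q6", False),
]

by_objective = defaultdict(list)
for objective, _question_id, correct in responses:
    by_objective[objective].append(correct)

for objective, results in by_objective.items():
    print(f"{objective}: {sum(results)}/{len(results)} correct")
# lines of symmetry: 2/2 correct
# identifying 3D shapes: 0/2 correct
# fractions: 1/2 correct
```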

I was spurred further into thought as, aside from the Assessment Without Levels report, two other windows open on my computer were Michael Tidd’s resources on curriculum key objectives and Daisy Christodoulou’s slides from researchED with her focus on multiple choice questions. I’m wondering if we could use these bits of information to create our own baseline tests.

So.

What do we baseline for?

  1. Identify what the pupil knows and can do/ may need help with.
  2. Set a starting point for us to gather data and measure progress.

For me, the first reason is the most important. I get frustrated when class teachers ask me how a new pupil has done and I have to report a vague ‘sub-level with caveats’. I want to be able to give them specifics, but specifics they will actually use. Even when we did online testing, I’m not sure how much of the data we printed off was actually used to assess pupil needs.

The second point is more linked to the development of the whole-school assessment policy (debate around the worth of measuring individual pupil progress would probably add another 1000 words). However, we do need to follow the Assessment Without Levels report and ensure our ‘curriculum and approach to assessment are aligned’, and I’m very aware that whatever system we come up with, we don’t ‘reinvent levels, or inappropriately jump to summary descriptions of pupils’ attainments’. The report also specifically states that ‘for pupils working below national expected levels of attainment assessment arrangements must consider progress relative to starting points and take this into account’. Baselining is our ‘starting point’ and needs to fit into this.

How do we create our own, useful, baseline procedure that identifies where our pupils are?

Well, we need to think about what our pupils need – there’s no point coming up with something that’s great for some kids but that our lot flounder with. I’ve done a lot of baseline assessments of SEMH pupils. I’ve had everything from pupils who hide under tables brandishing a weapon to ones who fly through at genius level. In my experience, they are often quite de-schooled and not used to sitting and working for any length of time; they have lower levels of literacy and big gaps in their knowledge; they’re apprehensive of coming to a new school and scared they’ll ‘fail’ the test. We need something to put them at ease and keep them engaged. My, not necessarily complete, list of requirements so far includes:

  • Easy to read/can be read to them
  • Doesn’t have to be done in one sitting
  • Adaptive to very different levels (I’ve sketched roughly what I mean by that just after this list)
  • Questions that aren’t too lengthy (lack of stamina/easily distracted/they simply don’t engage if a question even looks too long)
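On the ‘adaptive’ and ‘more than one sitting’ points, this is the sort of thing I have in mind – a very rough Python sketch, not any existing product; the question bank, difficulty bands and stopping rule are all invented for illustration:

```python
# Rough sketch of adaptive question selection: start mid-range, step the
# difficulty band up after a correct answer and down after a wrong one,
# and stop after a short block so the test can be resumed in another sitting.

import random

# question bank: difficulty band -> list of (question, answer); invented content
BANK = {
    1: [("3 + 4 = ?", "7"), ("10 - 6 = ?", "4")],
    2: [("6 x 7 = ?", "42"), ("48 / 8 = ?", "6")],
    3: [("What is 25% of 80?", "20"), ("Simplify 12/16", "3/4")],
}

def run_sitting(start_band=2, questions_per_sitting=4):
    band = start_band
    asked = []
    for _ in range(questions_per_sitting):
        question, answer = random.choice(BANK[band])
        reply = input(f"[band {band}] {question} ")
        correct = reply.strip() == answer
        asked.append((question, band, correct))
        # step up or down, staying within the bands that exist
        band = min(band + 1, max(BANK)) if correct else max(band - 1, min(BANK))
    return asked  # record results and the final band to resume next sitting

if __name__ == "__main__":
    print(run_sitting())
```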

The baseline tests we use now have multiple choice questions. Daisy Christodoulou’s work has prompted me to think about the use of these more closely and I’m wondering if we should use our curriculum to create our own questions. Is it worth going through the questions on the existing tests and evaluating where they fit our curriculum? Daisy shows the impact of different types of question, for example, using multiple correct answers to get them to really read the options, and thinking carefully about how our choice of the incorrect answers can inform us just as much.

[Slide from Daisy Christodoulou’s researchED presentation on multiple choice questions]
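As a thought experiment on that point about informative wrong answers, here’s a hedged sketch of a question format where each distractor is tagged with the misconception it’s designed to catch. The question, options and misconception labels are all made up, but it shows how a wrong answer could tell us why as well as that:

```python
# Sketch: a multi-answer MCQ where each incorrect option carries a label for
# the misconception it is designed to expose. All content here is invented.

QUESTION = {
    "stem": "Which of these shapes have at least one line of symmetry?",
    "options": {
        "A": ("square", True, None),
        "B": ("regular pentagon", True, None),
        "C": ("parallelogram", False, "assumes rotational symmetry implies line symmetry"),
        "D": ("scalene triangle", False, "thinks any triangle is symmetrical"),
    },
}

def mark(selected: set[str]) -> dict:
    """Mark a multi-select response and surface the misconceptions it reveals."""
    correct = {key for key, (_text, is_right, _why) in QUESTION["options"].items() if is_right}
    missed = correct - selected
    wrong = selected - correct
    return {
        "fully_correct": not missed and not wrong,
        "missed": sorted(missed),
        "misconceptions": [QUESTION["options"][k][2] for k in sorted(wrong)],
    }

print(mark({"A", "C"}))
# {'fully_correct': False, 'missed': ['B'],
#  'misconceptions': ['assumes rotational symmetry implies line symmetry']}
```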

Michael Tidd’s key objectives tables break down expectations for KS1 and 2 – do I assume that it’s worth measuring what they know of these before we move them on to our KS2 and 3 curriculum or KS4 courses? Does it work like that? Could we use Michael’s work to help us create our multiple choice questions? Certainly it would be easier to do for some objectives than others. How do we do that and avoid something like the old criteria based grids? (By recognising that it’s particular questions about that objective that they can/can’t do rather than securing that objective as a whole based on a couple of questions, I suppose.) It’s the gaps we need to find and fill in if they’re to access all the work we need/want to cover, so whatever we end up with needs to be both useful and used.

Is an administered test the answer? Might teacher assessment be more useful for teachers/more specific? (I should probably mention we run a primary model – one teacher to a class for most subjects – throughout ages 7-16, with some specialist subjects.) We wouldn’t get the data for tracking progress in the same way, but the data we start with at the moment is inaccurate anyway, and whilst we used to, we no longer repeat the paper GOAL tests with pupils later to compare. We do need to bear in mind, of course, that we ‘should be careful to avoid any unnecessary addition to teacher workload’.

I’ve looked around for alternatives that are on the market since we found out GOAL Online was being removed and I’ve not found a lot. The Assessment Without Levels report warns against buying in products and this probably goes for baselining too. The most promising thing I saw a while back was Alfie as it covered English, Maths and Science (we want all three, most do the first two) and you can piece together your own assessment from existing questions. We tried the GL Assessment NGRT with the Closing the Gap trial we were part of and the kids couldn’t cope with that at all – too long and in one sitting. Most gave up and guessed so the results, as beautifully presented as they were, were useless. One of our teachers has been looking at what CEM has to offer and I think it’s worth investigating, but I don’t think it’s what we’re after as a ‘useful’ baseline, certainly with the criteria I’ve thought of.

Thinking about it properly is pretty daunting. It’s a lot of work to set up – the whole process for the whole school is – but surely, as we get to grips with how we approach assessment without levels, it’s worth investing time and effort in the part that’ll start them all off? Do we wait for an online ‘bank’ of questions and go from there? Finding a balance between putting together a robust test that fits our pupils and our curriculum, avoiding excess data management for teachers, and ensuring they don’t have to test pupils again after I’ve done it is something I’m sure is possible. It’s on the tip of my brain but I’m not sure how we start.

I wrote some ideas about CPD in school last year and I rather suspect I overstepped the mark with that, so I’m cautious about making suggestions on this topic. Does anyone have any answers? What do other people use? Do I just carry on with what I’m doing ‘til told otherwise, or do I rock the boat a bit?

 

*The picture at the top isn’t an actual question from our tests. That’s just some stuff from our mantlepiece at home. Still, it’s quite similar to some of the questions so you can see where our problems lie.*