One of the challenges facing our school at the moment, as a result of assessment without levels, is our baselining procedure. Our pupils can come to us at any point from Year 3 to Year 9 (it’s rare that we take new pupils in KS4), with varying experiences of education. Some come straight to us from a mainstream school, some have been at the PRU and others have been allocated a number of hours of home tutoring.
We baseline new pupils on admission in order to assess their levels and needs and, eventually, to give us a measure of just how (hopefully) awesome our teaching has been. We use the Hodder Oral Reading Tests and Graded Word Spelling Tests for reading and spelling (which we use across the school twice a year). Until last year we used the GOAL online assessments for English, Maths and Science, but with the removal of NC levels the product was withdrawn from the market and, much to the pain of SLT, nothing has replaced it.
We have reverted to the paper version of the GOAL formative assessment that we used before the online tests. This is a series of multiple choice questions with a simple ‘number correct = NC sub-level’ conversion. There is information in the depths of the teacher’s guide to help analyse the results, but we just report the end level for our records.
An issue we find regularly with our pupils is that they come to us with huge gaps in their knowledge. Understandable if you’ve had a series of fixed-term or permanent exclusions. They’ve often missed whole topics and we find, for instance, that they’re brilliant at working out lines of symmetry but give them a 3D shape to identify and they haven’t got a clue.
Something that struck me as I baselined a new pupil the day after reading the ‘Commission on Assessment Without Levels: final report’ is that, as well as changing the way we assess generally, there must be a better way of assessing our pupils as they enter the school. We can’t rely on KS tests to give us a picture of what they can do and, as the report says, ‘There is no intrinsic value in recording formative assessment; what matters is that it is acted on’. As we develop our school assessment systems we need to look at how we build baselining into them, so that we can identify exactly what the pupils coming to us can do and where the gaps are that we need to fill. At the moment I give teachers an old NC sub-level that could be based on a pupil getting all the easy questions at the start of the test or all the harder ones at the end. To actually find out what the child can do, the teachers have to work it out for themselves.
I was spurred further into thought because, aside from the Assessment Without Levels report, two other windows open on my computer were Michael Tidd’s resources of curriculum key objectives and Daisy Christodoulou’s slides from researchED, with her focus on multiple choice questions. I’m wondering if we could use these bits of information to create our own baseline tests.
What do we baseline for?
- To identify what the pupil knows, what they can do and where they may need help.
- To set a starting point from which we can gather data and measure progress.
For me, the first reason is the most important. I get frustrated when class teachers ask me how a new pupil has done and I have to report a vague ‘sub-level with caveats’. I want to be able to give them specifics, but specifics they will actually use. Even when we did online testing, I’m not sure how much of the data we printed off was actually used to assess pupil needs.
The second point is more closely linked to the development of the whole-school assessment policy, and the debate around the worth of measuring individual pupil progress would probably add another 1,000 words. However, we do need to follow the Assessment Without Levels report and ensure our ‘curriculum and approach to assessment are aligned’, and I’m very aware that whatever system we come up with, we mustn’t ‘reinvent levels, or inappropriately jump to summary descriptions of pupils’ attainments’. The report also specifically states that ‘for pupils working below national expected levels of attainment assessment arrangements must consider progress relative to starting points and take this into account’. Baselining is our ‘starting point’ and needs to fit into this.
How do we create our own, useful, baseline procedure that identifies where our pupils are?
Well, we need to think about what our pupils need – there’s no point coming up with something that’s great for some kids but that our lot flounder with. I’ve done a lot of baseline assessments of SEMH pupils. I’ve had everything from pupils who hide under tables brandishing a weapon to ones who fly through at genius level. In my experience, they are often quite de-schooled and not used to sitting and working for any length of time; they have lower levels of literacy and big gaps in their knowledge; and they’re apprehensive about coming to a new school and scared they’ll ‘fail’ the test. We need something to put them at ease and keep them engaged. My (not necessarily complete) list of requirements so far includes:
- Easy to read/can be read to them
- Doesn’t have to be done in one sitting
- Adaptive to very different levels
- Questions that aren’t too lengthy (lack of stamina; easily distracted; they simply don’t engage if a question even looks too long)
The baseline tests we use now have multiple choice questions. Daisy Christodoulou’s work has prompted me to think more closely about the use of these, and I’m wondering if we should use our curriculum to create our own questions. Is it worth going through the questions on the existing tests and evaluating where they fit our curriculum? Daisy shows the impact of different types of question: for example, using multiple correct answers to make pupils really read the options, and thinking carefully about how our choice of incorrect answers can inform us just as much.
Michael Tidd’s key objectives tables break down expectations for KS1 and 2 – do I assume it’s worth measuring what pupils know of these before we move them on to our KS2 and 3 curriculum or KS4 courses? Does it work like that? Could we use Michael’s work to help us create our multiple choice questions? Certainly it would be easier to do for some objectives than others. How do we do that and avoid something like the old criteria-based grids? (By recognising that it’s particular questions about an objective that they can or can’t do, rather than judging the objective as secured as a whole on the basis of a couple of questions, I suppose.) It’s the gaps we need to find and fill if they’re to access all the work we need and want to cover, so whatever we end up with needs to be both useful and used.
Is an administered test the answer? Might teacher assessment be more useful for teachers, and more specific? (I should probably mention that we run a primary model of one teacher to a class for most subjects throughout ages 7–16, with some specialist subjects.) We wouldn’t get the data for tracking progress in the same way, but the data we start with at the moment is inaccurate anyway and, although we used to, we no longer repeat the paper GOAL tests with pupils to compare. We need to bear in mind, of course, that we ‘should be careful to avoid any unnecessary addition to teacher workload’.
I’ve looked around for alternatives on the market since we found out GOAL Online was being withdrawn, and I’ve not found a lot. The Assessment Without Levels report warns against buying in products, and this probably goes for baselining too. The most promising thing I saw a while back was Alfie, as it covered English, Maths and Science (we want all three; most cover only the first two) and you can piece together your own assessment from existing questions. We tried the GL Assessment NGRT as part of the Closing the Gap trial we were involved in, and the kids couldn’t cope with it at all – too long and in one sitting. Most gave up and guessed, so the results, as beautifully presented as they were, were useless. One of our teachers has been looking at what CEM has to offer and I think it’s worth investigating, but I don’t think it’s what we’re after as a ‘useful’ baseline, certainly against the criteria I’ve thought of.
Thinking about it properly is pretty daunting. It’s a lot of work to set up – the whole process for the whole school is – but surely, as we get to grips with how we approach assessment without levels, it’s worth investing time and effort in the part that starts them all off? Do we wait for an online ‘bank’ of questions and go from there? I’m sure it’s possible to find a balance: a robust test that fits our pupils and our curriculum, avoids excess data management for teachers, and ensures they don’t have to test pupils again after I’ve done it. It’s on the tip of my brain, but I’m not sure where we start.
I wrote some ideas about CPD in school last year and I rather suspect I overstepped the mark with that, so I’m cautious about making suggestions around this topic. Does anyone have any answers? What do other people use? Do I just carry on with what I’m doing ‘til told otherwise, or do I rock the boat a bit?
*The picture at the top isn’t an actual question from our tests – it’s just some stuff from our mantelpiece at home. Still, it’s quite similar to some of the questions, so you can see where our problems lie.*