Over the course of the twentieth century, airpower became a prime military resource. The strategy, it seemed, was to bombard the enemy into submission through sheer volume rather than targeted, precise strikes. Weight of numbers was, by all accounts, all that mattered, not just in terms of actual destruction but also in terms of attrition.
But what exactly has Western military strategy got to do with further education? The very same volume strategy once espoused by our air forces seems to pervade elements of FE, most notably the student survey.
Student surveys have reached saturation point, with leaders using them as a crutch to validate decisions. But just how accurately do they capture the reality of the situation?
A quick look at some recent high-profile polls (essentially student surveys on a grander scale) shows they haven’t fared well at making predictions. Polls on Brexit, the 2015 General Election and a range of other topics have highlighted just how out of step these surveys can be with reality.
And it’s much the same issue with surveys in the further education sector. Our College has recorded similar satisfaction scores in student surveys year on year, and while that’s no bad thing, as we have relatively high levels of student satisfaction, I’m starting to believe there should be more variation. For instance, our offer for students has changed, and I’d argue improved, with new facilities, new opportunities and an enhanced student experience. Despite these changes, student satisfaction has remained largely the same, with minimal fluctuation.
So why are we seeing such flat survey results, with no real change? Could it be that people are becoming apathetic towards surveys because of the sheer number of them? In a digital society we are constantly confronted by surveys, with marketers, colleagues, friends and everyone else asking for our opinion. Survey fatigue is real, and we seem to be living it in FE right now.
There is also reason to suspect that students’ survey responses are driven by a desire to support and protect their tutors, skewing the results away from reality. This is particularly likely in end-of-year satisfaction surveys, by which point students will have built strong connections with their tutors.
Whether or not the surveys are accurate, though, perhaps the more important question is whether we, in FE and wider society, are becoming excessively reliant on them to guide the work we do. I think surveys are giving people false reassurance that what they are doing is right. And that’s a problem: you’ll never find out what people actually want by taking that approach.
So what’s the answer? Well, this is where we can return to military airpower. When senior air force leaders began to recognise that bombing everything was not a particularly effective strategy, they moved to a far more targeted, so-called ‘precision’ approach. This involved tackling specific targets, and I’m sure you’ll remember those Gulf War videos of commanders watching smart bombs being laser-guided towards their targets.
Translating that to our survey issue: should we not be pairing qualitative analysis, which can reduce the positive bias that comes from students seeking to protect their tutors, with heavily targeted individual surveys that delve deeply into specific issues?
We may not get as many consistently good results, but there’s every chance it’ll give us better, more substantial and meaningful intelligence so that we can really start to drive regular improvements in what we do. And surely that’s exactly the result we all want for our students.