Thursday, 20 July 2017

Carpet bombing with surveys


Over the course of the twentieth century, airpower became a prime military resource. The strategy, it seemed, was to bombard the enemy into submission through sheer volume rather than targeted, precise strikes. Weight of numbers was, by all accounts, all that mattered – not just in terms of actual destruction, but also in terms of attrition.

But what exactly has Western military strategy got to do with further education? The exact same volume strategy espoused by our air force seems to pervade elements of FE, such as the student survey.

Student surveys are reaching saturation point, with leaders using them as a crutch to validate decisions. But just how accurately do they capture the true facts of a situation?

Take a quick look at some of the highest-profile polls of recent years (essentially student surveys on a grander scale) and you'll see they haven't done so well at making predictions. Polls on Brexit, the 2015 general election and a range of other topics have highlighted just how out of step these surveys can be with reality.

And it’s much the same issue with surveys in the further education sector. Our College has recorded similar satisfaction levels in its student surveys year on year, and while that’s no bad thing – we have relatively high levels of student satisfaction – I’m starting to believe there should be more variation. For instance, our offer for students has changed – and I’d argue improved – with new facilities, new opportunities and an enhanced student experience. Despite these changes, student satisfaction has remained largely the same, with minimal fluctuation.

So why are we seeing such flat survey results, with no great change? Could it be that people are becoming apathetic towards surveys because of the sheer number of them? In a digital society we are confronted by endless surveys, with marketers, colleagues, friends and everyone else asking for our opinion. Survey fatigue is real, and we seem to be living it in FE right now.

There may also be some cause to suggest that students’ survey responses are driven by a desire to support and protect their tutors, skewing the results away from reality. This is particularly likely in end-of-year satisfaction surveys, by which point students will have built strong connections with their tutors.

Whether the surveys are accurate or not, though, perhaps the more important question is whether we – in FE and in wider society – are becoming excessively reliant on surveys to guide the work we’re doing. I think surveys are becoming a crutch, giving people false reassurance that what they are doing is right. And that’s a problem, because you’ll never find out what people actually want by adopting that approach.

So what’s the answer to this problem? Well, this is where we can return to military airpower. When senior air force leaders began to recognise that a strategy of bombing everything was not a particularly effective one, they moved to a far more targeted, so-called ‘precision’ strategy: striking specific targets rather than whole areas. I’m sure you’ll all remember those Gulf War videos of commanders watching smart bombs being laser-guided towards their targets.


If we translate that into our survey issue, shouldn’t we be combining heavily targeted individual surveys with qualitative analysis – which removes the potential positive bias of students seeking to protect their tutors – to delve deeply into specific issues?

We may not get as many consistently good results, but there’s every chance it will give us better, more substantial and meaningful intelligence, so that we can really start to drive regular improvements in what we do. And surely that’s exactly the result we all want for our students.

2 comments:

  1. Like those laser-guided bombs, we should question just how targeted any survey is likely to be. Although online tools like SurveyMonkey make it easier for anyone to author and publish a survey, they cannot guarantee that the survey will be well written, that it will be relevant to the topic it seeks to examine, or that those analysing the results will be able to draw any meaning from them. (Assuming, as you point out, that it's being used as any more than a justification for decisions already taken.)

    If something must be seen to be done, it has now become perhaps too easy to make that action 'sending a survey'.

    Phil

  2. The problem with student surveys is that they seek opinions, not facts. For example, a question may ask about lessons starting on time, but it is worded to elicit a qualitative answer rather than a quantitative one. These answers are usually given on a five-point scale ranging from Poor to Excellent, and we are all familiar with such surveys. However, many may not be aware that their inventor, the American psychologist Rensis Likert, cautioned against treating the values as scores: it is not possible to sum human opinion, and it is certainly invalid to perform further mathematical operations – such as calculating a class or departmental average – to produce a quantitative outcome from qualitative data.
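
    To see the problem, here is a minimal Python sketch with made-up numbers (hypothetical responses, not data from any real survey): two very different classes can produce exactly the same 'average' score.

        # Hypothetical Likert responses (1 = Poor ... 5 = Excellent)
        class_a = [1, 1, 1, 5, 5, 5]  # polarised: half unhappy, half delighted
        class_b = [3, 3, 3, 3, 3, 3]  # uniformly middling

        print(sum(class_a) / len(class_a))  # 3.0
        print(sum(class_b) / len(class_b))  # also 3.0
        # Identical means, yet the underlying opinions could hardly be
        # more different - the average hides the distribution.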

    Therefore I would agree with Graham's premise that a more targeted approach is required. But, to return to a military analogy, even laser-guided weapons miss their targets or hit the wrong people, especially when operating in inclement weather such as fog. Likewise, in the fog of FE (not the fog of war!), where can managers find honest data or opinions? Perhaps student forums offer more insightful evidence as to successful classroom management, or otherwise. As an FE teaching practitioner, I watch the speed with which students complete surveys, and the haste with which they are finished probably invalidates many of their answers.

    I do recall once glimpsing a student completing a computer survey and scoring me as "very unsatisfactory" for the time it took me to mark and return his work. I pride myself on the effectiveness of my marking system, with most work being returned within seven days. I did not challenge the student, as I had already breached his confidentiality – I was only on hand to help students who did not understand words in the survey or who had IT difficulties in completing it. The truth of the matter was that the student in question had yet to hand in any assignment for marking, and he probably scored me negatively because I was constantly "on his case" trying to get him to finish a piece of work. Tutors can be at the mercy of the mean-minded or wrong-headed, in the same way that another student might reward a tutor with an undeserved high score. Perhaps the answer is to continue to use all the tools we have, but to remember that they only give us an impression of the student learning experience, not a quantitative result to be used for "scoring" purposes.
    Howard
