I’ve been working in the education sector for almost a decade, and after so long a great many things that inform my thinking become assumed, unacknowledged. Some of my greatest revelations have come from describing the things we do, the problems we have and the ways we fix them to folk outside of our wheelhouse. That’s what this post is: a primer on attitudes to technology in schools through the lens of cybersafety. I’m hoping to share it with other schools and with makers of technology solutions for feedback, and to update it over time.

Native talk

When I was at school there was no Internet; no Wi-Fi, social media, Netflix, or mobile phones. Disks were floppy and using a computer was hard. It’s a reality as alien to the students of today as my grandparents’ was to me (I remember my amazement, hearing them talk of writing their homework on a slate and riding hilly miles to school without gears on their bikes). I’ve had the benefit of growing up with a cornucopia of digital technologies, watching in real-time as they grew and changed and died, playing and tinkering with them, berating and eschewing them, hacking and breaking them.

Young humans today are plunged straight into the torrent of tech and by and large left to figure it out for themselves. ‘Digital natives’ we call them, the metaphor placing my generation as the outsider – presumably so we can shrug at the strange behaviour of the natives and not feel the need to concern ourselves with it. In reality it is we who are the natives, and if we do not bear the responsibility for teaching the ways of this land to newcomers, who will? How will they cultivate a healthy relationship with it and not lose themselves to it?

I often feel that the concept of cybersafety is a token gesture in this direction; handing a few rules to tourists as they step off the plane, dazed and blinking in the sunshine. ‘Watch out for snakes, wear sunscreen, always carry water.’ Phew, now it’s not our problem! We’ve done our bit. If anything goes wrong from here, it’s on them; they were warned, after all. It’s always in the negative: a list of DON’Ts with consequences alien to their lived experience. No wonder it’s so often disregarded.

EDU views

Let me zoom out to show what this looks like in context. Most schools subscribe to one of these perspectives:

  • “We have to stop kids from doing bad stuff so let’s block everything that’s not educational.” – The walled garden
  • “Kids are going to do bad stuff anyway so just block the minimum in order to prove basic duty of care and leave them to figure the rest out.” – The wild west

The ‘wild west’ places functionality at a premium in the belief that technology is inherently good and will fix the problem of human nature. With the ‘walled garden’ it’s safety that rules in the belief that technology is inherently harmful and we need to be protected from it.

The wild west interprets cybersafety as telling students how to behave, yet that can be a recipe for disaster as school-aged kids don’t have fully-developed decision-making faculties.

To the walled garden, cybersafety means technically preventing students from causing or receiving harm, yet this may stop them from learning to make wise choices in their online behaviour, which is no less destructive in the longer term.

Note the caveats in my critique, as each approach may be valid to a degree and appropriate for a time. The needs of primary and secondary students are very different; a healthy approach would certainly involve more restrictions for juniors and fewer for seniors. However, acknowledging the need for differentiation in our approach raises awkward questions (I’ll sketch what differentiated policies might look like after the list), such as:

  • How do we know where to set the bar for any given student or year group?
  • What happens when:
    • The restrictions get in the way and blunt the impact of technology?
    • The less-restricted cohort are getting distracted or up to no good?
  • How do we remediate these situations?
  • How do we explain to stakeholders that we’re now muddying the waters and students will neither all be fully protected nor completely free in their access?
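
To make the differentiation problem concrete, here’s a rough sketch of what per-cohort policies might look like written down as code. Everything in it (the category names, the year-level boundaries, the structure itself) is hypothetical, invented for illustration rather than taken from any real filtering product:

```python
# A minimal sketch of differentiated filtering policies per cohort.
# All categories, tiers and boundaries here are hypothetical.
from dataclasses import dataclass

@dataclass
class AccessPolicy:
    blocked: set[str]    # content categories denied outright
    monitored: set[str]  # categories allowed but logged for review

# Restrictions taper off as students mature; deciding exactly where
# each line sits is the awkward question raised above.
POLICIES = {
    "primary": AccessPolicy(
        blocked={"social", "streaming", "games", "forums"},
        monitored={"search", "video"},
    ),
    "middle": AccessPolicy(
        blocked={"games", "forums"},
        monitored={"social", "streaming"},
    ),
    "senior": AccessPolicy(
        blocked=set(),
        monitored={"social", "streaming", "games", "forums"},
    ),
}

def policy_for(year_level: int) -> AccessPolicy:
    """Map a year level to its cohort policy (boundaries illustrative only)."""
    if year_level <= 6:
        return POLICIES["primary"]
    if year_level <= 9:
        return POLICIES["middle"]
    return POLICIES["senior"]
```

The code is trivial; the decisions are not. Every set and every boundary in that sketch is a judgement call someone has to make, defend to stakeholders and revisit as cohorts change.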

This steers us to the heart of the matter which is that cybersafety isn’t something we can simply tell people to do, nor is it a position on a dial we can set our systems to and walk away. In isolation neither a walled garden nor a wild west is concerned with treating a human problem and if there’s one thing I have learned, it’s that you cannot entirely solve a human problem with technology.

A third way

I would say that the ideal of cybersafety incorporates both aspects I have mentioned: technical boundaries appropriate to developmental stage, combined with knowledge of appropriate behaviour, aimed at producing mature and responsible Internet citizens. So if the technocentric walled garden / wild west dichotomy does not bring us closer to this goal, why are these perspectives so commonly held, and is there anything better? I attempted to communicate this in a meeting recently with this sketch:

[Sketch: Student Internet Safety]

With safety as the x-axis we have the wild west on the left (more functional) and the walled garden on the right (safer). The y-axis is the effort (time and resources) needed to maintain each position, which peaks at the midpoint.

The magic midpoint (you were wondering if I’d have a cool name for it, weren’t you?) is where safety and functionality are balanced and technology can have the greatest impact; students have the latitude to take ownership of their learning in inventive ways while being aware of and respecting the limitations imposed on them. Reaching this place and holding it is costly though, since it is only possible through technological and human measures acting in concert in a continual balancing act. This means systems to restrict, monitor, log and alert on student activity; pastoral care staff to parse the reports and follow up with interventions and actions; more systems to log the outcomes of those interventions and feed changes back into teaching practice and technical boundaries; and procedures to ensure fairness and good governance.
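
As a rough sketch of that loop (every name, type and threshold below is invented for illustration; this describes no real system), the shape is something like:

```python
# The balancing act in miniature: machines flag activity, humans intervene,
# and the outcomes feed back into the technical boundaries.
from dataclasses import dataclass

@dataclass
class Event:
    student: str
    detail: str
    severity: int  # 0 = inane noise, 3 = serious

def flag(events, threshold):
    """Machine step: monitor, log and alert on student activity."""
    return [e for e in events if e.severity >= threshold]

def follow_up(alert):
    """Human step: pastoral care staff parse the report and intervene."""
    print(f"Intervention for {alert.student}: {alert.detail}")
    return alert.severity >= 2  # did this warrant tightening the boundaries?

def run_cycle(events, threshold):
    serious = [follow_up(a) for a in flag(events, threshold)]
    # Feedback step: intervention outcomes adjust the technical boundaries
    # (and, in reality, feed back into teaching practice and governance too).
    if any(serious):
        threshold = max(1, threshold - 1)  # widen the net after serious incidents
    return threshold

events = [Event("A", "off-task gaming", 1), Event("B", "searching for proxies", 3)]
threshold = run_cycle(events, threshold=2)  # only B is flagged; the net tightens
```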

This combination of measures supports the formation of cybersafe behaviour through practical training in a continual cycle of being granted appropriate access, being trusted to use it wisely, being held accountable, and being rewarded for responsible behaviour with increased access:

[Diagram: a continual cycle – Grant appropriate access → Trust to use it wisely → Hold accountable → Reward with advancement]
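
One way to picture the advancement part of that cycle is as a tier ladder: a clean review period moves a student up, a breach of the trust they were given moves them back down. Again, this is only a sketch with invented tiers and rules:

```python
# The grant -> trust -> accountability -> reward cycle as a simple
# tier ladder. Tier names and review rules are hypothetical.
TIERS = ["restricted", "standard", "extended", "open"]

def review(current: str, breaches: int) -> str:
    """Move up a tier after a clean review period (reward), or back
    down a tier if the trust granted was breached (accountability)."""
    i = TIERS.index(current)
    if breaches == 0:
        return TIERS[min(i + 1, len(TIERS) - 1)]  # reward: increased access
    return TIERS[max(i - 1, 0)]                   # accountability: reduced access

print(review("standard", breaches=0))  # -> extended
print(review("extended", breaches=2))  # -> standard
```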

Trust without accountability is meaningless and progression without responsibility is dangerous, yet to manage each student through this cycle requires an enormous amount of time and resources on the part of the school.

Set against this standard, the low effort needed at an extreme of safety or functionality is appealing; on one hand we ignore all but the most egregious infractions, and on the other there isn’t much latitude for causing them in the first place. The systems can be less smart, the humans less attentive. However, despite the foundational expectations of technology inherent in these positions (either to solve or prevent the problem), each ultimately diminishes technology’s capacity, since students are either too distracted by unrestricted use or disengaged by straitjacket restrictions. Expensive devices become playthings or paperweights.

Cyber not-quite-safe?

The risks of the walled garden (reduced functionality leading to low usage and investment) and the wild west (high likelihood of student distraction and malfeasance) are impossible to miss; not so with the midpoint, where we often fall off to one side or the other while attempting to maintain balance. This will result in either:

  • Accidental exposure or deliberate access to inappropriate content, or
  • Appropriate activities or content being unduly restricted.

If expectations are not correctly set, stakeholders such as teachers and parents may either be shocked at the first sign of anything not kosher appearing on a screen or become irate at unanticipated barriers placed in their way.

Perhaps a greater difficulty is illustrated by the ‘Blennerhassett Effect’ – named after its inventor, our school chaplain Wayne. He works with students day in and day out and has managed not to go (entirely) bonkers. He describes a dumb-to-dodgy scale: most students with access to a device and unsupervised time will do dumb things (mostly harmless, time-wasting nonsense), while a very few will do dodgy things (dangerous to self or others, inappropriate or illegal):

[Diagram: Blennerhassett Continuum, from dumb to dodgy]

This insight reveals a real problem: the number of incoming signals to any behaviour-monitoring technology is going to be vast, and mostly inane. The critically important signals are far fewer and risk being drowned out in the noise until they become more numerous, by which time the damage has been done and the opportunity for early intervention lost. This is where traditional monitoring systems based on static rules (keywords, URLs, etc.) become more burden than help, and the need for machine learning emerges. Alerts based on static rules grow in frequency with the number of signals, whereas more signal only increases the accuracy of alerts based on pattern matches or deviations from longitudinal trends. Severity also matters; many minor breaches will likely fly beneath the radar of a rule-based system, whereas a behaviour engine could see them as an emerging pattern.
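
To show the difference, here’s a toy contrast between a static keyword rule and a longitudinal baseline. It assumes nothing more than a per-student daily count of flagged events; the data and thresholds are invented, and a real behaviour engine would be far more sophisticated:

```python
# Toy contrast: a static rule fires on every match (noise scales with
# volume), while a longitudinal baseline improves with volume.
from statistics import mean, stdev

KEYWORDS = {"proxy", "bypass"}  # static rule: every match is an alert

def static_alerts(events):
    return [e for e in events if any(k in e.lower() for k in KEYWORDS)]

def trend_alert(daily_counts, today):
    """Alert only when today's activity deviates sharply from this
    student's own history, so many minor breaches surface together
    as an emerging pattern rather than vanishing into the noise."""
    if len(daily_counts) < 7:
        return False  # not enough history to form a baseline
    mu, sigma = mean(daily_counts), stdev(daily_counts)
    return today > mu + 3 * max(sigma, 1)  # crude z-score-style threshold

print(static_alerts(["how to bypass the filter", "maths homework help"]))
# -> ['how to bypass the filter']  (and every similar hit, forever)

history = [2, 1, 3, 2, 2, 1, 2, 3]     # one student's recent daily counts
print(trend_alert(history, today=14))  # -> True: a spike no one keyword reveals
```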

In short, if we are to reach the ideal of cybersafety, we’re going to need really smart systems that correlate information from multiple sources to build an accurate picture of student behaviour in order to inform constructive interventions.

Dearth of interest

I referred to the magic midpoint as ‘peak effort’ earlier, and have outlined some of the reasons for this already. I’ve also shown how important a smart system would be in shouldering this burden. Sadly, not only does such a system not exist; neither, it seems, does the will to create it. Here is a selection of phrases pulled from the websites of vendors competing in this space:

“School Internet security”
“K-12 content filter”
“Protect students anywhere”
“Block millions of harmful sites”
“Shield kids from risks and harms”
“Identify off-task students”
“Keeping students and data safe”
“Reduce cybersecurity risk”

These all chime perfectly with a walled garden / wild west philosophy:

Prove duty of care so you can get on with your life? ✔️
Block all the bad stuff so you don’t need to think about it? ✔️

It’s not that we shouldn’t have Internet security or be able to protect students; far from it. But none say ‘protect most of the people most of the time!’ or ‘help students and yourself learn from your mistakes!’. Perhaps this is a case of technology vendors reducing a complex issue to one with a simple answer, but it’s more likely that this is simply not what most schools want. Cybersafety is more easily construed as a checklist, curriculum or appliance than dealt with the hard way: acknowledging it as a core part of each student’s ongoing development.


