‘banning’ underage sexting on social media


When the UK government is not busy looking for ways to invade internet users’ privacy, it’s looking for ways to restrict what they are able to do online — particularly when it comes to things of a sexual nature.

The health secretary Jeremy Hunt has called for technology companies and social media platforms to do more to tackle the problems of cyberbullying, online intimidation and — rather specifically — under-18s texting sexually explicit images. Of course, he doesn’t have the slightest idea how to go about tackling these problems, but he has expressed his concern, and that, in conjunction with passing the buck to tech companies, should be enough, right?

The setting in which Hunt raised his concerns was the Commons Health Committee session on suicide prevention — so far, so social warrior. He hand-wringingly refers to the worrying mental health problems exhibited by youngsters, and then points to bullying and sexual imagery as a cause. He asks what probably sounds to him like a perfectly reasonable question — unfortunately, it gives away his complete lack of understanding of technology in general, and of the internet and mobile more specifically:

I think social media companies need to step up to the plate and show us how they can be the solution to the issue of mental ill health amongst teenagers, and not the cause of the problem. There is a lot of evidence that the technology industry, if they put their mind to it, can do really smart things.

For example, I just ask myself the simple question as to why it is that you can’t prevent the texting of sexually explicit images by people under the age of 18, if that’s a lock that parents choose to put on a mobile phone contract. Because there is technology that can identify sexually explicit pictures and prevent it being transmitted.

If this technology exists — and is actually reliable — someone has been keeping very quiet about it. AI and image recognition may have come a long way in recent years, but there is no system, no software, that is anywhere near 100 percent accurate. And if a system is not 100 percent accurate — or at least very close — the occasions on which it gets things wrong are going to be problematic for the people involved; problematic in ways that are near-impossible to predict.
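To see why "not 100 percent accurate" matters at scale, consider some back-of-the-envelope arithmetic. All the numbers below are invented for illustration — the volume of images, the proportion that are explicit, and the classifier's accuracy are assumptions, not measured figures:

```python
# Back-of-the-envelope false-positive arithmetic.
# Every number here is an illustrative assumption, not a real statistic.

daily_images = 500_000_000   # hypothetical images sent per day across platforms
explicit_rate = 0.001        # hypothetical fraction that are actually explicit
accuracy = 0.99              # hypothetical classifier accuracy (optimistic)

explicit = daily_images * explicit_rate
innocent = daily_images - explicit

false_positives = innocent * (1 - accuracy)   # innocent images wrongly blocked
false_negatives = explicit * (1 - accuracy)   # explicit images that slip through

print(f"Innocent images wrongly blocked per day: {false_positives:,.0f}")
print(f"Explicit images missed per day: {false_negatives:,.0f}")
```

Even with a generously assumed 99 percent accuracy, the sheer volume of innocent traffic means millions of false blocks a day — vastly outnumbering the explicit images the filter exists to catch.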


Hunt continues to think aloud, asking the likes of Facebook to help out in his dream of a sex-free, pure, virginal internet and mobile system:

I ask myself why we can’t identify cyberbullying when it happens on social media platforms by word pattern recognition, and then prevent it happening. I think there are a lot of things where social media companies could put options in their software that could reduce the risks associated with social media, and I do think that is something which they should actively pursue in a way that hasn’t happened to date.

What Hunt proposes might well be possible if the government had complete control over the internet. It might like that idea but, thankfully, it is not (yet) the case. He overlooks so many things in making his ‘simple’ suggestions that it makes you question whether he should really be talking about them at all. Aside from the problem of correctly identifying the contents of images, has he also come up with a system that magically and correctly knows everyone’s age — and cannot be bypassed or fooled?


Has he thought about encrypted messages and how these would be analyzed? Has he thought about the privacy implications of any of this? Has he thought about the countless channels through which messages can be sent? Are they all to be monitored? Who is to pay for all this?

I’ll leave the final word to Jonathan Haynes who, writing for the Guardian, says:

The idea of identifying bullying through word pattern recognition that cannot be circumvented is sadly laughable. While machine learning is progressing, its application to high level language is still limited to flagging potential issues for humans to then evaluate. It is of no use in a real-time situation.

Children are particularly adept at creating new meanings and phrases to intimidate or coerce. No proscription of what can and cannot be said is going to stop that. Is the blanket censorship of non-approved communications for all under 18s — something that goes far further than even the Great Firewall of China — really the kind of thing a government minister should be able to idly suggest in 2016?
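Haynes’s point about circumvention is easy to demonstrate. Here is a minimal sketch of the kind of word-pattern filter Hunt imagines — the banned-word list and the messages are made up for illustration — and how a one-character change defeats it:

```python
# A naive word-pattern filter of the sort proposed: match messages against
# a fixed banned-word list. The list and messages are invented examples.

BANNED_WORDS = {"loser", "ugly"}

def is_bullying(message: str) -> bool:
    """Flag a message if any word matches the banned list exactly."""
    return any(word in BANNED_WORDS for word in message.lower().split())

print(is_bullying("you are a loser"))    # True: exact match is caught
print(is_bullying("you are a l0ser"))    # False: one-character swap evades it
print(is_bullying("u r a looooser"))     # False: elongated slang evades it
```

Any fixed list is a moving target: as soon as a spelling is blocked, a new one appears, which is precisely why real moderation systems flag content for human review rather than attempting automatic, real-time prevention.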

Image credit: Georgejmclittle / Shutterstock