The BBFC’s Harmful Research


The British Censor’s new research is simply a questionnaire telling them what they wanted to hear.

The British Board of Film Classification is still stinging over the collapse of the UK government’s plans to regulate internet porn. No surprise there – as the body tasked with overseeing the unworkable legislation, the BBFC was set to earn a pretty penny (and make no mistake – the Board might be ‘non-profit’ but don’t think that its executives are not coining it in). More significantly, as a body that continually bigs itself up as the most trusted and reliable arbiter of what is or isn’t permissible for the British public to watch, finally getting hold of the internet – the wild west that makes a mockery of its often questionable decisions – was something long lusted after. To come so close only to have it snatched away must have stung mightily.

It isn’t just porn that the BBFC has wanted to regulate – for some years, the Board has attempted to persuade both governments and individual websites that it should have oversight of any video content. There have often been big plans – the late, unlamented regulator ATVOD tried to enforce BBFC rules on VOD businesses and porn sites alike, and YouTube has long been a target, with the regulation of music videos being bigged up only to fizzle into nothing. But the loss of the new porn regulation seems to have put the BBFC on something of an offensive – possibly because an online harms bill is still working its way through Parliament, with the possibility of the porn block returning through the back door.

Yesterday, the BBFC issued a press release with the sensational headline “Half of children and teens exposed to harmful online content while in lockdown” – the sort of thing that has been dutifully reported by news outlets without question. It does sound shocking – but we only need to look a little further into this story for it to all start falling apart.

“New research by the British Board of Film Classification has shown that children and teens are being exposed to harmful or upsetting content while in lockdown, often on a daily basis” says the press release, but we might question what the BBFC considers to be research. Obviously, no one is out there studying the viewing habits of children and teens – ages that potentially range from one to nineteen, by the way – so what is this research? Why, it’s a questionnaire by YouGov. Now, I don’t want to be cynical here, but this does not seem a very sound or thorough research method to me. We’re not told how many children and teens were questioned, or the age ranges. Naturally, we don’t get to see the questions asked, so we have no idea how leading they were. All we know are the results, which tell us that 47% of them “have seen content they’d rather avoid, leaving them feeling uncomfortable (29%), scared (23%) and confused (19%).”

Well, I would say that uncomfortable, scared and confused are three very different things – and not necessarily bad things, depending on the context. Given that we are in the middle of a global pandemic, I’m surprised the figures are so low – I think the nightly news has regular content that is scary, uncomfortable and confusing, frankly, especially for kids who are often going to take on the panic of this more than adults.

The question we might ask is how these three emotions translate to harm. How do we know that anyone has been harmed by being made to feel uncomfortable? This emphasis on ‘harmful content’ is super-emotive, but entirely meaningless unless we know what that content actually is. Are we leaving it up to the survey respondents to define what is harmful? That doesn’t seem to make sense. Are we talking about sexual imagery? Violent imagery? Worrying news reports about daily death tolls? What? The press release doesn’t tell us, and so the claim that 47% of children and teens have seen harmful imagery is effectively nonsense – but a highly effective emotional nonsense. The closest we get to detail is that 24% of fourteen-year-olds see ‘harmful’ content every day – but again, if we don’t know what that content is, how can we reach any conclusion?

But this isn’t really about harm – it’s a publicity stunt and a grab for power. Read on in the press release and we’re told that “82% of parents and three quarters (73%) of children want to see trusted BBFC age ratings and ratings info displayed on user-generated content platforms like YouTube, so they can avoid content that might upset or disturb them.” Aha.

The release continues with BBFC Chief Executive David Austin saying “what a difference it would make, for example, if YouTube had well known, trusted BBFC age ratings created by those uploading or watching the video, that parents and young people recognise from the cinema, DVD and Blu-ray and Netflix, linked to filters.”

Well, it would certainly make a difference to the BBFC. But content filters already over-compensate for ‘unsuitable’ images, which is why so few people use them despite the government insisting on them being on by default – and how could we guarantee that the BBFC ratings would be applied correctly? After all, they’ve spent decades telling us that only their highly-trained censors can negotiate the minefield of age ratings, and now they’re going to leave it to any old vlogger, video pirate or influencer to decide if their video should be PG, 12, 15 or 18 rated? Good luck with that.

The BBFC is very good at taking opinion polls that give it the results it wants. And the Board knows that a lazy and compliant press won’t dig deep into this, not when there are sensational headlines and a free story to be had. But let’s not be fooled – this is self-generated fake news, and the only harm we can be sure about here is to common sense.


Like what we do? Support us on Patreon so that we can do more!