Ofcom report found 1 in 5 harmful search results were “one-click gateways” to more toxicity

Move over TikTok. Ofcom, the U.K. regulator that enforces the now-official Online Safety Act, is preparing to take on an even larger target: search engines such as Google and Bing, and their role in presenting harmful content, including suicide, self-injury and other harmful acts, at the click of a button. This is especially true for underage users.

A report commissioned by Ofcom and produced by the Network Contagion Research Institute found that major search engines including Google, Microsoft’s Bing, DuckDuckGo, Yahoo and AOL become “one-click gateways” to such content by facilitating easy, quick access to web pages, images and videos — with one out of every five search results around basic self-injury terms linking to further harmful content.

The research is timely because much of the recent focus on harmful content online has been on walled-garden social media sites such as Instagram and TikTok. This new research represents a first step for Ofcom in understanding and gathering evidence of whether there is a greater potential threat, with open-ended sites like Google.com attracting over 80 billion visits per month, compared to TikTok's monthly active user base of around 1.7 billion.

“Search engines are often the starting point for people’s online experience, and we’re concerned they can act as one-click gateways to seriously harmful self-injury content,” said Almudena Lara, Online Safety Policy Development Director, at Ofcom, in a statement. “Search services need to understand their potential risks and the effectiveness of their protection measures – particularly for keeping children safe online – ahead of our wide-ranging consultation due in Spring.”

Ofcom reported that researchers analysed some 37,000 result links across those five search engines. Using both common and more cryptic search terms (cryptic to try to evade basic screening), they intentionally ran searches with "safe search" parental screening tools turned off, mimicking both the most basic ways that people might engage with search engines and the worst-case scenarios.

The results of the study were in many respects as bad and as damning as one might expect.

Not only were 22% of search results linking to harmful content (including self-harm instructions) available with just one click, but that content also accounted for 19% of links on the first pages of results (and 22% of the links at the very top).

Researchers found that image searches were especially egregious: a full 50% of them returned harmful content, followed by web pages (28%) and videos (22%). The report concludes that search engines may be unable to screen out some of these images because their algorithms confuse them with legitimate media and medical imagery.

The cryptic terms also evaded the screening algorithms more effectively: they made it six times more likely that a user would reach harmful content.

The role that generative AI search engines might play in this area is not discussed in the report, but it is likely to become an issue in the future. So far, there seem to be more controls in place to prevent platforms like ChatGPT from being misused. The question is whether users will learn how to game those controls, and where that could lead.

“We’re already working to build an in-depth understanding of the opportunities and risks of new and emerging technologies, so that innovation can thrive, while the safety of users is protected. Some applications of Generative AI are likely to be in scope of the Online Safety Act and we would expect services to assess risks related to its use when carrying out their risk assessment,” an Ofcom spokesperson told TechCrunch.

It’s not all a nightmare: some 22% of search results were also flagged for being helpful in a positive way.

Ofcom may use the report to better understand the issue, but it also gives search engine providers a heads-up on what they will need to do. Ofcom has already made clear that protecting children will be its first priority in enforcing the Online Safety Act. In the spring, Ofcom plans to open a consultation on its Protection of Children Codes of Practice, which aims to set out "the practical steps search services can take to adequately protect children."

This will include taking steps that minimize the likelihood of children being exposed to harmful content on sensitive topics such as suicide or eating disorders, across the entire internet, including search engines.

“Tech firms that don’t take this seriously can expect Ofcom to take appropriate action against them in future,” the Ofcom spokesperson said. Ofcom has said that it will use fines only as a last resort. In the worst-case scenario, court orders could be issued to force ISPs to block services that don’t comply with the rules, and there could be criminal liability for executives who oversee services that violate them.

So far, Google has taken issue with some of the report’s findings and how it characterizes the company’s efforts, claiming that its parental controls do much of the important work and invalidate some of the findings.

“We are fully committed to keeping people safe online,” a spokesperson said in a statement to TechCrunch. “Ofcom’s study does not reflect the safeguards that we have in place on Google Search and references terms that are rarely used on Search. Our SafeSearch feature, which filters harmful and shocking search results, is on by default for users under 18, whilst the SafeSearch blur setting – a feature which blurs explicit imagery, such as self-harm content – is on by default for all accounts. We also work closely with expert organisations and charities to ensure that when people come to Google Search for information about suicide, self-harm or eating disorders, crisis support resource panels appear at the top of the page.”

Microsoft and DuckDuckGo have so far not responded to requests for comment.