Ofcom is considering the use of automation to meet the digital protection requirements of the Online Safety Act, a move that could prove a boost for AI content moderation start-ups.
The UK’s media regulator is responsible for enforcing recently enacted legislation that strengthens online safety, particularly for children.
On Friday, Ofcom said it plans to “develop further proposals on how AI can be used to detect illegal content and harm to children”.
The communications watchdog said it would launch a consultation “later this year” on how automated detection tools could be used to prevent people from seeing harmful content.
Under the Online Safety Act, passed last October, platforms that host user content are required to take strict measures to block harmful material, including material that promotes bullying, suicide or self-harm.
The regulator’s exploration of AI as a tool presents an opportunity for the growing market of AI start-ups working in content moderation.
Among them is Unitary AI, a London-based start-up developing machine learning technology that mimics human moderation to detect whether photos and videos contain harmful content.
The company raised significant funding in 2023, with a £6.7m round in March and a £12m round in October.
As of the latest round, the company said it analyzes 6 million videos in multiple languages every day.
Competitor Arwen AI, based in Surrey, similarly uses automated technology to combat hate speech on social media.
Arwen has developed dozens of algorithms designed to detect various forms of potentially harmful content, including spam, profanity, insults, and hate speech.
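Neither firm discloses its model internals, but a toy sketch can illustrate the general shape of multi-category moderation: score a piece of content against several harm categories and flag anything that crosses a threshold. Everything below, from the category keyword lists to the threshold, is hypothetical; production systems use trained machine learning models rather than keyword matching.

```python
from dataclasses import dataclass

# Toy stand-ins for trained per-category classifiers (illustration only).
CATEGORY_KEYWORDS = {
    "spam": {"free", "winner", "click"},
    "profanity": {"damn"},
    "insult": {"idiot", "loser"},
}

@dataclass
class Flag:
    category: str
    score: float  # fraction of tokens matching the category

def moderate(text: str, threshold: float = 0.1) -> list[Flag]:
    """Flag every category whose score crosses the threshold."""
    tokens = [t.strip(".,!?") for t in text.lower().split()]
    if not tokens:
        return []
    flags = []
    for category, keywords in CATEGORY_KEYWORDS.items():
        score = sum(t in keywords for t in tokens) / len(tokens)
        if score >= threshold:
            flags.append(Flag(category, round(score, 2)))
    return flags

print(moderate("Click here, winner! Free prizes!"))
# [Flag(category='spam', score=0.6)]
```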
Although automation is a far more scalable approach to content moderation than human review, automated systems can incorrectly flag content as violating policy, errors known as false positives.
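That trade-off is easy to see with a small worked example: raising the decision threshold cuts false positives but lets more genuinely harmful content through (false negatives). The scores and labels below are invented purely for illustration.

```python
# Made-up classifier scores for ten posts; `harmful` is the ground-truth
# label a human moderator would assign (illustration only).
posts = [
    (0.95, True), (0.88, True), (0.81, False), (0.74, True),
    (0.62, False), (0.55, True), (0.41, False), (0.33, False),
    (0.22, True), (0.10, False),
]

def confusion(threshold: float) -> tuple[int, int]:
    """Count (false positives, false negatives) at a given threshold."""
    fp = sum(score >= threshold and not harmful for score, harmful in posts)
    fn = sum(score < threshold and harmful for score, harmful in posts)
    return fp, fn

for t in (0.3, 0.6, 0.9):
    fp, fn = confusion(t)
    print(f"threshold={t}: {fp} false positives, {fn} false negatives")
# threshold=0.3: 4 false positives, 1 false negatives
# threshold=0.6: 2 false positives, 2 false negatives
# threshold=0.9: 0 false positives, 4 false negatives
```

Platforms tune that threshold depending on which error is costlier for them: over-blocking legitimate posts or missing genuinely harmful ones.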
Read more: Ofcom hires Big Tech staff to enforce online safety laws