Ofcom Investigates Telegram Under UK “Online Safety Act”
British speech regulator Ofcom has opened a formal investigation into Telegram under the Online Safety Act, alongside probes into teen chat sites Teen Chat and Chat Avenue. The regulator can levy fines of up to £18 million (about $24 million) or 10% of global revenue, whichever is greater, and in the harshest scenarios can ask a court to order UK internet providers to block a service outright.
Telegram rejects the premise of the inquiry. In a statement posted on X, the company said: “Telegram categorically denies Ofcom’s accusations. Since 2018, Telegram has virtually eliminated the public spread of CSAM on its platform through world-class detection algorithms and cooperation with NGOs.”
The Dubai-based firm added that it was “surprised by this investigation and concerned that it may be part of a broader attack on online platforms that defend freedom of speech and the right to privacy.”
Telegram is one of the few large messaging services that still treats private communication as private, offering optional end-to-end encryption. Its inclusion on Ofcom’s enforcement list arrives in the context of a law that was sold to the public as a tool against the worst content online, but whose actual scope reaches far wider.
The Online Safety Act obliges any user-to-user service “operating” in the UK to assess, mitigate, and document the risk of “illegal content” appearing on the platform. The list of what counts as illegal content runs to dozens of categories. The obligation sits with the platform, the definitions sit with Ofcom, and the penalties sit with whoever the regulator decides has not done enough.
Ofcom says the Telegram probe began after “evidence regarding the alleged presence and sharing of CSAM on Telegram, including from our own assessment of the platform, and from the Canadian Centre for Child Protection.” Suzanne Cater, the regulator’s Director of Enforcement, said in the announcement: “These firms must do more to protect children, or face serious consequences under the Online Safety Act.”
Nobody sensible argues against removing child sexual abuse material. The disagreement is about method, scope, and who gets power over what else comes with it. Child protection is the justification that consistently accompanies online speech legislation, and the enforcement architecture it authorizes rarely stops at the stated target.
The Act’s structure illustrates the point. Telegram is being examined for CSAM-related duties. Teen Chat and Chat Avenue are being examined for grooming risks, with the Chat Avenue probe also covering exposure of children to pornography.
But the same compliance machinery (risk assessments, mandated mitigations, hash-matching deployment, and content-detection obligations) applies across every category of illegal content the Act names, and several of those categories are far broader than CSAM. “Foreign interference,” “false communications,” incitement offences, and various public-order categories all fall under the same duties.
The scanning infrastructure a platform builds to satisfy Ofcom on one category is available for use against all of them.
The European Commission’s campaign against Elon Musk’s X follows a similar pattern, and its chronology tells its own story. It began with algorithms, speech that the Commission disliked, and a fine the company is challenging in court. The child abuse allegation arrived later, attached to an inquiry that was already struggling to land. Read in sequence, the accretion looks less like child protection than a regulator searching for evidence it could use.
Ofcom’s own announcement included what it described as good news.
Pixeldrain, a file-sharing service, “made material improvements to its Illegal Content Risk Assessment and implemented perceptual hash matching” after Ofcom raised concerns (a sketch of what that technique involves follows below). Yolobit, another file-sharing service, went in the other direction and simply blocked UK users, at which point Ofcom closed its investigation.
Five other file-sharing providers took the same exit route.
Yolobit and its five unnamed counterparts are data points. They did not go through a court process. They did not contest a content order. They calculated the cost of compliance against the size of the UK market and chose to geofence the country out.
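To make the “perceptual hash matching” Pixeldrain adopted concrete: the technique fingerprints an image in a way that survives resizing and re-encoding, then compares uploads against operator-supplied fingerprints of known illegal images. Below is a minimal sketch of one common variant, difference hashing, assuming the Pillow library is available; production systems such as Microsoft’s PhotoDNA use more sophisticated proprietary algorithms, and nothing here is Pixeldrain’s actual implementation.

```python
from PIL import Image

def dhash(path: str, size: int = 8) -> int:
    """Fingerprint an image by comparing the brightness of adjacent pixels."""
    # Grayscale and shrink to (size+1) x size so re-encoding, resizing,
    # or minor edits barely change the resulting hash.
    img = Image.open(path).convert("L").resize((size + 1, size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(size):
        for col in range(size):
            left = pixels[row * (size + 1) + col]
            right = pixels[row * (size + 1) + col + 1]
            bits = (bits << 1) | (left > right)
    return bits

def hamming(a: int, b: int) -> int:
    """Count differing bits; a small distance means a likely match."""
    return bin(a ^ b).count("1")

# A platform screens uploads against a list of known-bad hashes, e.g.
# flagging for review when hamming(dhash("upload.jpg"), known_hash) <= 10.
```

The matching is deliberately fuzzy: near-duplicates hash close together, which is what makes the technique effective against re-encoded copies, and also what makes the contents of the blocklist the real locus of power.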
Telegram’s specific complaint, that the investigation may form part of a wider campaign against platforms that protect encrypted communication, is harder to dismiss than British officials would like.
The Online Safety Act contains provisions that would, if Ofcom chose to use them, require messaging services to deploy client-side scanning technology capable of examining users’ messages before encryption. The government has said it will not activate those powers until scanning can be done without compromising security. Every independent technical assessment of client-side scanning has concluded that cannot be done. The powers remain on the statute book, unused but usable, and their existence already shapes how encrypted messaging services think about the UK.
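What those provisions contemplate is easier to see in miniature. The sketch below is a deliberate simplification, assuming an exact-match hash blocklist (real proposals favor perceptual hashes or machine-learning classifiers, which match more loosely); the function names are illustrative, not any vendor’s actual API.

```python
import hashlib

# Hashes of prohibited content, pushed to every client by the operator.
# (Illustrative entry; real proposals use perceptual rather than exact hashes.)
BLOCKLIST = {hashlib.sha256(b"known-bad-content").hexdigest()}

def report_match(digest: str) -> None:
    # In a deployed system, this would notify the operator or authorities.
    print(f"match reported: {digest[:16]}...")

def send_message(plaintext: bytes, encrypt) -> bytes | None:
    """Hypothetical client-side scan: runs on-device, before encryption."""
    digest = hashlib.sha256(plaintext).hexdigest()
    if digest in BLOCKLIST:
        report_match(digest)   # the scan fires against plaintext
        return None            # message suppressed before sending
    return encrypt(plaintext)  # end-to-end encryption only begins here
```

The structural objection is visible in the sketch itself: the scan reads plaintext, the client obeys whatever blocklist it is handed, and nothing in the mechanism restricts that list to CSAM. That is why independent assessments keep reaching the conclusion the government’s own caveat concedes.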
In February, Telegram was fined by Australia’s online safety regulator for responding late to questions about the measures it takes to prevent the spread of child abuse and violent extremist material.
The case is being cited in some coverage as evidence of a pattern of non-cooperation. A different reading is available. Telegram operates across dozens of jurisdictions and faces demands from regulators with very different legal standards, oversight mechanisms, and political climates. Cooperating reflexively with each one compounds quickly into a posture no privacy-oriented platform can sustain.
There is also Pavel Durov, Telegram’s founder, who remains under criminal investigation in France on charges connected to content posted by users of the platform. The prosecution of a platform executive for content his users created is itself a precedent that should trouble anyone who uses the internet. Ofcom’s investigation does not carry criminal exposure for individuals. It does carry the threat of financial penalties severe enough to reshape the platform’s willingness to operate in the UK at all.
The final piece of context concerns what Ofcom is permitted to demand once an investigation is open. Under its Section 100 information powers, the regulator can require detailed information on how a service assesses risk, what automated detection systems it uses, what content it has removed, and what content it has kept. The answers go to Ofcom. They do not go to a court. They do not go to the public. Whether a platform’s moderation decisions were reasonable becomes a matter between the platform and the regulator, with the Act’s enforcement provisions held in reserve if the regulator is unsatisfied.
Ofcom said it will “provide an update on this investigation in due course.” How long that takes, and what it produces, will say something about whether the Online Safety Act is the narrowly targeted child protection measure its defenders describe, or the general-purpose speech regulation its architecture actually establishes.