Date
25 July 2025
Issue
How will the Online Safety Act 2023 affect consumer use of technology?
Short Answer
The Online Safety Act 2023 represents the UK’s most ambitious attempt to regulate digital platforms and tackle online harms. For consumers, its most visible effects are mandatory age checks, new content restrictions and filtering tools, and the withdrawal or limitation of some services in the UK.
Development of the Bill
Receiving Royal Assent in October 2023, with its main duties phased in through July 2025, the Act imposes new legal duties on tech companies to prevent the spread of illegal content and to protect children from harmful material. Government ministers view it as a long-overdue reform to hold platforms accountable; it introduces sweeping requirements around age assurance, content moderation, and risk assessments, overseen by Ofcom.
Ofcom has broad enforcement powers, including investigations and fines of up to £18 million or 10% of qualifying worldwide revenue, whichever is greater. Platforms that sidestep the law, or that facilitate circumvention through means such as promoting VPN use, risk being restricted or blocked in the UK. Ofcom’s early codes of practice specify technical and procedural standards covering moderation, privacy assessments, and user reporting mechanisms.
Analysis
The Act introduces a new statutory duty of care for online services. These services must take proportionate steps to reduce the risk of users encountering illegal and harmful content, from terrorism to cyberbullying. In-scope services, wherever they are based, must embed risk management, age verification, and moderation into their product design to comply. Ofcom now serves as the primary regulator, and future case law will clarify key terms such as “proportionate steps” and “highly effective” age assurance. Legally, the Act marks a shift from traditional “safe harbour” protections under frameworks like the EU’s eCommerce Directive: platforms must actively mitigate foreseeable harms without assuming blanket liability.
Industry reactions vary. Larger firms have adopted facial scans and document-based age checks; Reddit, X (formerly Twitter), and smaller platforms apply UK-only policies or age estimation; some international services block UK users altogether due to costs and uncertainty.
Implementation has sparked controversy over privacy, feasibility, and unintended consequences, especially for platforms built around interactive or user-generated content. As the UK becomes the first liberal democracy to mandate comprehensive age assurance, observers are watching closely to see whether the Act sets a global precedent or becomes obsolete.
The Act is also set to affect gaming platforms, with GTA 6 shaping up as a significant test case. Rockstar Games is reportedly preparing UK-specific age verification for features such as voice chat and messaging to comply with Ofcom’s rules. This goes beyond standard PEGI ratings: developers must implement real-time technical restrictions, which could affect gameplay and monetisation. While players worry about fragmented experiences, it marks a new era of legal compliance for game publishers.
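In practice, such restrictions amount to gating specific features by jurisdiction and verification status. What follows is a minimal sketch of that pattern in Python; the session fields and feature names are hypothetical, since Rockstar has not published its implementation.

    from dataclasses import dataclass

    # Interactive features treated as higher-risk under the Act's codes
    # because they let users contact one another.
    RESTRICTED_FEATURES = {"voice_chat", "text_messaging", "content_sharing"}

    @dataclass
    class Session:
        user_id: str
        country: str        # ISO country code from the account or IP geolocation
        age_verified: bool  # set only after a successful age-assurance check

    def feature_enabled(session: Session, feature: str) -> bool:
        """Gate interactive features for UK users until age assurance succeeds."""
        if feature not in RESTRICTED_FEATURES:
            return True                # core gameplay is unaffected
        if session.country != "GB":
            return True                # the restriction applies in the UK only
        return session.age_verified    # UK users must verify before using it

    # An unverified UK player can still play but cannot join voice chat.
    player = Session(user_id="p1", country="GB", age_verified=False)
    assert feature_enabled(player, "single_player")
    assert not feature_enabled(player, "voice_chat")

The design point is that the gate sits on individual features rather than on the game as a whole, which is what makes the fragmented-experience concern plausible: two players in different countries see different products.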
Beyond gaming, the Act poses a significant challenge to social media and messaging platforms, particularly those with end-to-end encryption such as WhatsApp and Signal. Both have previously threatened to exit the UK market rather than weaken user security for a small share of their global user base, a prospect that has caused public concern given how many UK adults rely on WhatsApp. Platforms such as Reddit and X, by contrast, appear to be complying, with some subreddits now restricted to age-verified users. Reddit has long gated 18+ subreddits based on the age declared on an account, but news subreddits such as r/Aljazeera and r/Israelexposed are now restricted for users who have not proven they are 18.
The Act has also had an unexpected effect on Spotify, a music, podcast, and video service. To access any content labelled 18+, users must first undergo a facial scan for age estimation. If the estimate is inconclusive or disputed, users must provide ID to prove their age, and anyone found to be below Spotify’s minimum age for their region faces account deactivation and deletion.
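The flow described above is an escalation ladder: low-friction estimation first, document checks only on failure, deactivation as a last resort. A sketch of that ladder follows, assuming a UK minimum account age of 13; the provider calls are stubs standing in for a vendor API such as Yoti’s, whose real interface is not reproduced here, and the placeholder return values exist only so the example runs.

    MIN_AGE_UK = 13  # assumed minimum account age for Spotify in the UK

    def estimate_age_from_scan(scan_image: bytes) -> int:
        """Stub for a facial age-estimation call to a third-party provider."""
        return 21  # placeholder so the example runs

    def verify_id_document(document: bytes) -> int:
        """Stub for a document check that returns the age the ID proves."""
        return 21  # placeholder so the example runs

    def gate_explicit_content(scan: bytes, id_document: bytes | None = None) -> str:
        """Escalate from age estimation to ID checks, then to deactivation."""
        if estimate_age_from_scan(scan) >= 18:
            return "access_granted"
        if id_document is None:
            return "id_required"            # estimation failed: ask for ID
        proven_age = verify_id_document(id_document)
        if proven_age >= 18:
            return "access_granted"
        if proven_age < MIN_AGE_UK:
            return "account_deactivated"    # below the platform minimum entirely
        return "access_denied"              # old enough for the service, not for 18+

    print(gate_explicit_content(b"scan-bytes"))  # access_granted with stub values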
From 13 August 2025, YouTube will use a user’s search and viewing history to determine whether they are over 18, regardless of the birth date on the account. Users suspected of being underage must provide either a facial scan or an ID to access restricted content, an escalation flow similar to Spotify’s, layered on top of default restrictions of the kind YouTube Kids already applies.
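What is new here is the inference step: age is estimated from behavioural signals before any check is demanded. YouTube has not published its model, so the following sketch is purely illustrative, with invented signal terms and an arbitrary threshold.

    def likely_adult(watch_history: list[str],
                     adult_signal_terms: set[str],
                     threshold: float = 0.5) -> bool:
        """Flag an account as likely adult when enough of its history
        matches adult-typical interests; other accounts get escalated."""
        if not watch_history:
            return False
        hits = sum(any(term in entry for term in adult_signal_terms)
                   for entry in watch_history)
        return hits / len(watch_history) >= threshold

    # An account failing this check would be routed to the facial scan or
    # ID path sketched above, whatever birth date it declares.
    signals = {"mortgage", "tax return", "car insurance"}
    history = ["best mortgage rates 2025", "minecraft speedrun", "tax return help"]
    print(likely_adult(history, signals))  # True: 2 of 3 entries match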
Challenges and Criticisms
The Act’s age verification requirements have drawn sustained criticism. Critics warn that mandatory identity checks could produce a “surveillance internet”, and that outsourcing ID verification to third parties invites data misuse and discourages free expression.
A glaring issue is the reliance on third-party identity verification providers (e.g. Yoti, used by Spotify, and Persona, used by Reddit and X). Because the government has not created a centralised scheme, identity data is scattered across many commercial providers, each a potential breach target, and a single compromise could leak data on a massive scale, as the Tea app breach in the US demonstrated. In addition to Persona, X uses two other providers for ID verification, one of which, Au10tix, was founded by former Israeli intelligence officers. When X began passing user data to Au10tix in August 2023, the arrangement sparked controversy over the privacy and security implications of handing user data to former members of Shin Bet. Meanwhile, news about the war in Gaza is now restricted for X users who have not verified their age.
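The structural worry is visible in the shape of the integration itself: the raw identity document leaves the platform and lands with the vendor. The sketch below shows that hand-off under a data-minimising design in which the platform retains only a boolean result; the VerificationProvider class is a hypothetical stand-in, not Yoti’s or Persona’s actual API.

    import datetime
    from dataclasses import dataclass

    @dataclass
    class VerificationResult:
        over_18: bool                  # all the platform needs to keep
        checked_at: datetime.datetime
        provider: str

    class VerificationProvider:
        """Hypothetical vendor that receives the raw identity document."""
        name = "example-vendor"

        def check(self, id_document: bytes) -> bool:
            # The vendor, not the platform, processes the document here.
            # This hand-off is what critics worry about: a breach at one
            # vendor exposes documents from every platform that uses it.
            return True  # placeholder outcome so the example runs

    def verify_user(provider: VerificationProvider,
                    id_document: bytes) -> VerificationResult:
        """Keep only a yes/no answer and an audit trail, never the document."""
        return VerificationResult(
            over_18=provider.check(id_document),
            checked_at=datetime.datetime.now(datetime.timezone.utc),
            provider=provider.name,
        )

    result = verify_user(VerificationProvider(), b"passport-scan")
    print(result.over_18)

Even under this minimal design, the vendor still holds the sensitive material, which is why the absence of a centralised, audited scheme draws criticism.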
The Act also requires platforms to provide tools that let users filter “legal but harmful” content. This has raised concerns about algorithmic bias, over-moderation, and the suppression of lawful speech. Enforcement is further complicated by the rise of tools such as VPNs that help users bypass the rules, which critics argue may push platforms towards over-policing content and collecting more intrusive data than necessary.
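Mechanically, such a user filter is simple; the difficulty lies upstream, in whatever classifier assigns the category labels. A minimal sketch, with invented category names:

    from dataclasses import dataclass, field

    @dataclass
    class FilterSettings:
        # Categories this user has opted to hide; anything else gets through.
        hidden_categories: set[str] = field(default_factory=set)

    def visible(post_categories: set[str], settings: FilterSettings) -> bool:
        """Hide a post only if the user chose to filter one of its labels."""
        return not (post_categories & settings.hidden_categories)

    # A user filters self-harm content but keeps news and satire.
    prefs = FilterSettings(hidden_categories={"self_harm"})
    assert visible({"political_satire"}, prefs)
    assert not visible({"self_harm", "news"}, prefs)

The filter is only as neutral as the labelling behind it, which is exactly where the bias and over-moderation concerns arise.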
Conclusion
The Online Safety Act 2023 is poised to be one of the defining digital laws of the decade. It will reshape platform duties around user safety, data governance, and content regulation while influencing how technology is designed and compliance is managed. Its success or failure depends on how platforms, regulators, and courts interpret its broad mandates in practice. For digital services operating in the UK, compliance is mandatory; ignorance is no longer a defence.