Safety by Design
A practical alternative approach to addressing online harm
Online harm is a real safety issue, especially for young women, girls, gender-diverse people, and marginalised communities.
This page explains the Online Safety (Duties for Providers of Internet-Based Services) Bill in clear language: what it is, what it would require of platforms, how it differs from an under-16 ban approach, and why YWCA is advocating for a platform responsibility model.
What is this bill?
The Online Safety (Duties for Providers of Internet-Based Services) Bill is an Opposition Member’s Bill introduced by Labour MP Reuben Davidson in late 2025. It is positioned as a political alternative to proposals that would ban under-16s from social media. Rather than treating young people as “the problem”, the Bill focuses on the design of platforms and the way large online services operate.
The Core Idea: Safety by Design
Safety by design means online platforms should be safe to use by default, in the same way products like cars are required to include safety features.
The Bill’s underlying argument is:
- the problem isn’t young people using the internet
- the problem is platforms designed to maximise engagement (keep people scrolling), even when that creates predictable risks and harms
What the Bill would require platforms to do
Duty of Safe Use
Platforms would be required to take reasonable steps to minimise harmful content, including preventing exposure to illegal or dangerous material (for example, extreme violence or content promoting eating disorders).
Duty of Safety by Design
Safety would need to be built in from the start. When a company designs or changes features or algorithms, it would need to assess whether those choices could harm users, and prevent harmful design outcomes.
Duty of Safety Transparency
Platforms would be required to publish clearer information about how their algorithms work and what steps they take to keep users safe, rather than hiding behind “commercial sensitivity”.
Who Would Enforce It?
The Bill would likely empower a regulator (such as the Commerce Commission or a new Online Safety Commissioner) to enforce these duties, including through penalties if platforms fail to design and operate safely.
How this differs from an under-16 ban approach
There are different approaches being discussed in Aotearoa. This Bill is positioned as an alternative to an approach that focuses on restricting access by age.
A simple way to think about the difference:
- Age-based restriction approach: “You must be this tall to ride.” It focuses on who can access the platform.
- Safety-by-design approach (this Bill): “The ride must not have sharp edges.” It focuses on making the platform safer for everyone, regardless of age.
YWCA’s advocacy focuses on platform responsibility, because safer design protects young people, adults and communities most targeted by online harm.
Current status and why the Bill still matters
As an Opposition Member’s Bill, this Bill is unlikely to become law unless the Government changes its position or adopts elements of it into Government legislation.
Even so, the Bill matters because it:
- Sets out a clear and practical model for regulating platforms;
- Strengthens the public debate by offering a credible alternative approach;
- Creates a pathway for key duties to be adopted into future legislation.
What we can learn from other countries
Other countries have already moved beyond voluntary codes and age-only restrictions, and are requiring online platforms to take responsibility for safety by design. These international examples show both what works and what we can learn from, helping inform what a strong approach for Aotearoa could look like.
Australia
Australia was one of the first countries to establish a dedicated online safety regulator through the eSafety Commissioner.
Under Australia’s model, platforms are required to meet Basic Online Safety Expectations and to explain how they are keeping users safe. This has shifted online safety from a voluntary exercise to a regulated responsibility.
What has changed in practice:
- Platforms can no longer rely on vague or generic claims about safety.
- The regulator can compel companies to answer detailed questions about their safety systems.
- Companies can be fined when they fail to provide adequate information or strip back safety capability.
Real-world outcome:
In 2023–2024, the regulator fined X (formerly Twitter) after it provided inadequate responses about its child safety practices following major staffing cuts. This exposed how quickly safety features can be weakened when companies prioritise cost-cutting over user safety.
What this shows:
Safety-by-design laws don’t just remove content; they expose risk, enforce accountability, and deter negligence, even when platforms resist change.
United Kingdom
The United Kingdom’s Online Safety Act 2023 places legally enforceable safety duties on platforms, with Ofcom as the regulator.
What has changed in practice:
- Social media platforms introduced new default safety settings for teenagers.
- Features that encourage endless scrolling or amplify harmful content were reassessed.
- Platforms were forced to invest more heavily in safety teams and systems.
Real-world outcomes:
- Instagram introduced “Teen Accounts”, making accounts private by default for under-16s.
- Major pornography sites introduced strict age-verification systems.
- Early data showed a significant drop in traffic to high-risk sites once safeguards were required.
The trade-off:
The UK experience also shows that stronger regulation raises complex questions about privacy and age assurance. These debates underline why thoughtful, transparent design requirements matter, not just blunt restrictions.
Transparency has Revealed New Zealand is Treated Differently
International transparency reporting has revealed a stark reality: users in New Zealand and Australia often receive weaker safety protections than users in Europe. Because stronger laws apply in the EU and UK, European users can opt out of personalised, addictive algorithmic feeds, platforms must disclose how risks are identified and mitigated, and human moderation is prioritised in those larger, regulated markets. In contrast, smaller markets like New Zealand rely more heavily on automated moderation, local context and language are often poorly understood, and safety features available elsewhere (such as in the EU) are not always rolled out here.
Why this matters for Aotearoa New Zealand
These international examples show that:
- Platform behaviour changes when regulation requires it;
- Safety-by-design approaches can lead to real product changes;
- Transparency obligations expose risks that voluntary reporting hides.
The Online Safety (Duties for Providers of Internet-Based Services) Bill draws directly on these lessons. It aims to ensure online platforms operating in Aotearoa are held to clear, enforceable safety expectations, rather than relying on voluntary promises or placing responsibility on individuals after harm occurs.
You can make a difference by:
- Joining our campaign and signing the petition.
- Contacting your local MP (our templates make this easier).
- Sharing the message with your friends and community.