Artificial intelligence is now part of everyday childhood. It shows up in learning platforms, social apps, games, search, and increasingly in health and wellbeing supports. For families navigating medical vulnerability and long recovery periods, AI also offers a genuine promise. It can personalise learning, reduce barriers to participation, and help children stay connected to school and peers when illness has interrupted their lives. However, the same technologies that create these possibilities are also used in ways that place children at risk. The question is no longer whether children will encounter AI. The question is whether we build and use it with child safety as a baseline condition.
A new global framework from the Safe AI for Children Alliance provides an important centre of gravity for this debate. The Alliance has set out three “non-negotiables” for any AI system that interacts with children. First, AI must never be allowed to generate fake or sexualised images of children. Second, it must never be designed to foster emotional dependency in children, for example, through manipulative companion experiences. Third, it must never encourage or facilitate self-harm. These three boundaries are presented not as optional best practices, but as enforceable minimum standards that should underpin product design, regulation, monitoring, and age assurance.
This framing aligns closely with the emerging international child-rights consensus. UNICEF’s policy guidance on AI for children treats safety, privacy, fairness, and transparency as rights obligations under the Convention on the Rights of the Child, and calls for governments and developers to actively protect children in any AI system that affects them. UNESCO makes a similar argument, emphasising that children’s rights and voices must be central to AI governance rather than an afterthought. Together, these positions signal a shift away from treating harms as isolated incidents and towards designing systems that assume children require stronger safeguards by default.
Australia is moving in the same direction, and quickly. In September 2025, the eSafety Commissioner registered new industry codes under the Online Safety Act designed to stop generative AI chatbots and companion systems from exposing children to, or engaging them with, sexual, violent, or self-harm content. The codes require risk assessment, mitigation, and reporting, backed by substantial penalties for non-compliance. This is a strong signal that child-focused AI regulation is not theoretical. It is already becoming part of the national safety architecture.
For Back on Track Foundation, this context matters. Our mission is to support children and families through educational recovery after cancer treatment. We see the value of AI for medically vulnerable children, particularly when it helps personalise learning, improve accessibility, and reduce school disruption. However, we also recognise that the benefits of AI only hold if the systems are safe, rights-aligned, and accountable from the start. Our SAFE AI approach means that any AI-enabled service we use or build must sit behind clear safety guardrails consistent with the three non-negotiables. It must respect children's privacy and dignity. It must be transparent, clinically and educationally responsible, and focused on wellbeing rather than engagement at any cost.
AI can be part of a stronger, fairer recovery pathway for children. To get there, we have to insist on rules that protect them first. The non-negotiables set a clear global minimum. Back on Track supports that line, and we will design on the right side of it.