The investigation into AI chatbot child safety follows mounting concerns about inadequate safeguards on platforms increasingly popular among young Australians. Character.ai alone reportedly had nearly 160,000 monthly active users in Australia as of June 2025, and schools report children as young as 10 and 11 spending up to six hours a day on AI companions, many of which feature sexualized content.
Commissioner Julie Inman Grant stated in the announcement that “there can be a darker side to some of these services, with many chatbots capable of engaging in sexually explicit conversations with minors.” She emphasized that concerns have been raised about platforms potentially encouraging suicide, self-harm, and disordered eating among vulnerable young users.
The legal notices demand comprehensive information on content filters, child protection systems, and moderation processes. Companies must demonstrate how they design services to prevent harm proactively rather than simply responding to incidents after they occur. The regulator specifically seeks details on protections against child sexual exploitation material, pornography, and content promoting suicide or eating disorders.
Australia’s strict online safety framework empowers the eSafety Commissioner to compel companies to reveal their safety measures or face substantial financial penalties. Companies that fail to comply with the reporting notices face enforcement action, including court proceedings and fines of up to A$825,000 (approximately US$536,000) per day.
The action comes weeks after Australia registered six new industry-drafted codes in September 2025, described as world-first requirements for companies to embed safeguards and use age verification before deploying AI chatbot services. The codes apply to AI chatbot apps, social media platforms, app stores, and technology manufacturers.
Commissioner Inman Grant emphasized the urgency of AI chatbot child safety measures, stating “I do not want Australian children and young people serving as casualties of powerful technologies thrust onto the market without guardrails and without regard for their safety and wellbeing.” She criticized the tech industry’s typical “move fast and break things” approach, declaring Australia will not wait for casualties before acting.
The codes and standards are legally enforceable under Australia’s Online Safety Act, with breaches potentially resulting in civil penalties of up to A$49.5 million. The framework represents a shift toward requiring companies to verify the ages of users attempting to access harmful content and to conduct comprehensive risk assessments of their AI chatbot features.
Scrutiny of AI chatbot child safety has intensified following a lawsuit in the United States against Character.ai, in which a mother claims her 14-year-old son died by suicide after extensive interactions with an AI companion on the platform. The company has sought dismissal while asserting it implemented safety features, including pop-ups directing users to the National Suicide Prevention Lifeline when they indicate thoughts of self-harm.
Organizations covered by Australia’s new codes must now undertake broad risk assessments evaluating the likelihood of harmful content being accessed through their services. For platforms offering AI companion chatbot features, the codes mandate a separate risk assessment specifically evaluating whether the chatbots could generate harmful content for Australian children.
Commissioner Inman Grant, who worked in the technology sector for 22 years before her regulatory role, stated companies “know exactly what they’re doing” and emphasized that “these companies must demonstrate how they are designing their services to prevent harm, not just respond to it.” She warned that failure to protect children or comply with Australian law will result in enforcement action.
The requirements position Australia as a global leader in regulating emerging AI technologies to protect vulnerable users. Commissioner Inman Grant emphasized the proactive nature of the approach, stating authorities won’t wait to “see a body count” before requiring companies to implement appropriate safeguards for children accessing AI companion services.
To follow Australia’s pioneering regulatory efforts and global AI safety developments protecting vulnerable users, visit ainewstoday.org for breaking coverage of child protection measures, platform accountability initiatives, and evolving standards shaping responsible artificial intelligence deployment worldwide!