Table of Contents
- 1. TL;DR: Meta will allow parents to block AI chatbots for their children
- 2. Meta’s new measures for teen safety
- 3. Parental controls for AI chatbots
- 4. FTC investigation into chatbots and their impact on minors
- 5. Prohibitions on topics addressed by Meta’s chatbots
- 6. Implementation of changes on Meta’s platform
- 7. Age controls at OpenAI and its ChatGPT chatbot
- 8. Objectives of Meta’s new controls
- 9. Reactions and criticism of Meta’s policies
- 10. Meta and the protection of minors in the age of AI
TL;DR: Meta will allow parents to block AI chatbots for their children
- Meta will implement parental controls for its AI chatbots.
- Parents will be able to disable one-on-one chats and block specific characters.
- The FTC is investigating the impact of chatbots on minors.
- New safety measures will be applied starting in 2026.
- Meta seeks to prevent inappropriate conversations and protect teenagers’ mental health.
Meta’s new measures for teen safety
Meta has announced a series of measures aimed at improving teen safety on its platforms, especially in relation to artificial intelligence (AI) chatbots. These measures come amid growing concern about the negative impact technology can have on young people. The company has decided to implement parental controls that will allow parents to manage how their children interact with these AI systems.
Meta’s decision comes after criticism of the behavior of its chatbots, which have at times held inappropriate conversations with minors. In response, the company has modified its policies to prohibit its chatbots from addressing sensitive topics such as suicide, self-harm, and romantic relationships. The implementation of these changes is scheduled to begin in early 2026, with an initial focus on Instagram.
Meta has stated that its goal is to provide parents with effective tools to monitor and limit their children’s access to chatbot interactions, thereby ensuring a safer environment. The company has also emphasized that its chatbots are designed to avoid discussions of harmful topics and redirect users to professional resources when necessary.
Parental controls for AI chatbots
Meta is introducing parental controls that will allow parents to manage their children’s access to AI chatbots. These measures include the ability to disable one-on-one chats and block specific characters. The features of these controls are detailed below.
Disabling one-on-one chats
Parents will be able to completely disable one-on-one conversations between their children and AI characters. This feature aims to reduce teens’ exposure to potentially harmful or inappropriate interactions. Even if chats with specific characters are disabled, Meta’s general AI assistant will remain available to provide educational information and support, thus maintaining a useful resource for young people.
Blocking specific characters
In addition to disabling one-on-one chats, parents will be able to block specific characters they consider inappropriate for their children. This will allow parents to have greater control over their children’s interactions with AI chatbots, ensuring they only communicate with characters that address appropriate and safe topics. Meta has noted that its chatbots are designed to interact with teens on topics such as education, sports, and hobbies, avoiding romantic or sexual content.
FTC investigation into chatbots and their impact on minors
The United States Federal Trade Commission (FTC) has launched an investigation into the impact of AI chatbots on minors. This investigation focuses on how these systems can negatively affect children and adolescents, especially with regard to their mental health and well-being.
The FTC seeks to understand what measures AI companies have taken to ensure the safety of their products when interacting with young people. Concern about the use of chatbots in conversations that may be harmful has led to increased regulatory pressure on technology companies. This investigation comes in a context in which several tragic incidents, such as the suicide of a teenager following interactions with a chatbot, have highlighted the need for greater oversight and regulation in this area.
Prohibitions on topics addressed by Meta’s chatbots
Meta has implemented strict prohibitions on the topics its chatbots can address with teens. These prohibitions are designed to protect young people from conversations that could be harmful or inappropriate. Among the prohibited topics are:
- Suicide and self-harm: Chatbots must not engage in discussions on these topics and must redirect users to support resources.
- Eating disorders: Conversations about food and mental health must be handled carefully, avoiding any content that could be harmful.
- Romantic or sexual content: Interactions must focus on age-appropriate topics, excluding any type of content that could be considered inappropriate.
These measures are part of a broader effort by Meta to ensure its platforms are safe for teens and to address parents’ concerns about their children’s use of technology.
Implementation of changes on Meta’s platform
The rollout of the new parental controls and safety measures is scheduled to begin in early 2026. Meta has indicated that these changes will initially apply on Instagram and will expand to other platforms in the future. Below is a timeline of the rollout and the affected geographic areas.
Implementation timeline
- Early 2026: The new parental controls are expected to be implemented on Instagram in English, starting in the United States, the United Kingdom, Canada, and Australia.
- Future expansion: Meta plans to extend these features to other platforms and regions as safety policies are developed and refined.
Affected geographic areas
The parental controls will initially launch in the following countries:
- United States
- United Kingdom
- Canada
- Australia
This geographic strategy aligns with Meta’s approach to addressing safety concerns in markets where its presence is strongest and where regulation around minors’ use of technology is increasing.
Age controls at OpenAI and its ChatGPT chatbot
OpenAI, the company behind ChatGPT, has also begun implementing stricter age controls for its chatbot. These controls aim to ensure that underage users do not access inappropriate content and that they are offered safe interactions.
OpenAI has announced that its chatbot will be able to provide erotic content, but only to verified adult users. This measure is part of a broader effort to treat adult users as adults while protecting minors from harmful content. The implementation of these controls comes in response to recent incidents that have raised concerns about teens’ online safety.
Objectives of Meta’s new controls
Meta’s new parental controls have several key objectives. First, they seek to provide parents with effective tools to monitor and manage their children’s interactions with AI chatbots. This includes the ability to block specific characters and disable one-on-one chats.
In addition, Meta aims to create a safer environment for teens, preventing them from being exposed to inappropriate or harmful conversations. By establishing prohibitions on certain topics and directing users to support resources, the company seeks to protect young people’s mental health.
Lastly, these controls are part of a broader effort to address regulatory concerns and improve users’ trust in Meta’s platforms. The company is responding to public and government pressure to ensure its products are safe and responsible.
Reactions and criticism of Meta’s policies
Meta’s new policies have generated a variety of reactions and criticism. Some experts and child safety advocates have praised the company’s efforts to implement parental controls and bans on sensitive topics. However, others have expressed skepticism about the effectiveness of these measures.
“Meta’s measures are a step in the right direction, but they are not enough to address all the risks associated with minors using chatbots.”
Child safety experts
The criticism centers on the concern that Meta’s measures may be seen as reactive, implemented only after tragic incidents. In addition, some argue that the company must do more to ensure that its AI systems do not encourage emotional dependence or manipulative behavior in young people.
Meta and the protection of minors in the age of AI
The current context of AI and minors
The growing presence of artificial intelligence in teenagers’ everyday lives raises important questions about young people’s safety and well-being. As more platforms integrate AI chatbots, the need for effective regulations and controls becomes increasingly urgent.
New safety measures implemented by Meta
The safety measures implemented by Meta are an attempt to address concerns about AI’s impact on minors. By providing parental controls and bans on sensitive topics, the company seeks to create a safer environment for teenagers.
The importance of parental supervision
Parental supervision is crucial in the digital age. Parents must be informed about how their children interact with technology and what measures are in place to protect them. Meta’s new tools offer parents greater ability to manage these interactions.
Challenges and criticism of the new policies
Despite Meta’s efforts, challenges and criticism persist. The effectiveness of parental controls and teenagers’ ability to circumvent restrictions are ongoing concerns. In addition, young people’s online privacy remains a sensitive issue that must be addressed.
The future of interaction between minors and artificial intelligence will depend on companies’ ability to implement effective safety and regulatory measures. As technology continues to evolve, it will be essential for platforms to prioritize young people’s safety and well-being in their developments.
This article has explored Meta’s new measures to protect teenagers in the era of artificial intelligence, highlighting the importance of parental supervision and the need for a proactive approach to technology regulation. Implementing parental controls and bans on sensitive topics is an important step toward a safer digital environment for young people.

Martin Weidemann is a specialist in digital transformation, telecommunications, and customer experience, with more than 20 years leading technology projects in fintech, ISPs, and digital services across Latin America and the U.S. He has been a founder and advisor to startups, works actively with internet operators and technology companies, and writes from practical experience, not theory. At Suricata he shares clear analysis, real cases, and field learnings on how to scale operations, improve support, and make better technology decisions.

