Concern is rising over young people's interactions with AI chatbots: Meta has introduced new tools for parents to monitor their children's chatbot conversations, and some provinces are considering banning AI chatbot use among youth.
Meta's Teen Accounts supervision feature lets parents on Facebook, Instagram and Messenger see the topics and categories their children have discussed with the AI chatbot over the past week.
For instance, parents can check on topics like “health and well-being” to see if their children have discussed fitness, physical health, or mental well-being.
Meta is also working on alerts to inform parents if their teens attempt to discuss suicide or self-harm with the chatbot.
As provincial governments move toward restricting AI chatbots, Manitoba has announced plans to prohibit youth from using AI chatbots and social media. B.C. Attorney General Niki Sharma said the province would act if the federal government does not implement protections for youth on AI chatbots and social media.
Legal Actions Holding AI Creators Responsible
There are growing worries about the potential mental health risks of extensive AI chatbot use, especially among younger users. Families of the victims of the Tumbler Ridge, B.C., shooting, in which eight people were killed, have filed a lawsuit against OpenAI, alleging the company failed to report disturbing content the shooter shared with ChatGPT despite being aware of it.
OpenAI has stated that it has enhanced its safeguards, particularly in how ChatGPT responds to signs of distress.
Another lawsuit, filed by the parents of 16-year-old Adam Raine, alleges that ChatGPT contributed to the teen's suicide.
Chatbots Designed for Engagement, Not Support
Concerns extend beyond severe consequences like those seen in Tumbler Ridge. Research is emerging on the risks associated with specific uses of AI chatbots.
The worry is not only about using chatbots for mental health support, but also about AI's tendency to reinforce users' perspectives, which can validate disordered thinking. Prolonged conversations are also linked to higher risk.
Darja Djordjevic, a psychiatrist based in New York, co-authored a recent risk assessment on the use of chatbots for mental health support.
Based on her findings, she advises against using chatbots for mental health support at present.
Djordjevic, a member of Stanford Brainstorm, has collaborated with tech companies on mental health research and highlights the potential risks of AI systems like ChatGPT.


