While Artificial Intelligence (AI) is making human life easier, it is also sending out worrying signals. With the emergence of generative AI, the risks associated with the technology have grown even further. Recently, a tragic incident in Florida (USA) drew widespread attention to the darker side of AI.
What happened in Florida: In February 2024, 14-year-old Sewell Setzer III died by suicide after prolonged interactions with Character.ai's AI chatbot. His mother, Megan Garcia, filed a lawsuit against the company, alleging that the chatbot provoked her son's suicide. The court has now allowed the case to proceed against both Google and Character.ai.
What the court says: US District Judge Anne Conway ruled that, at this stage, the companies have not shown that free-speech protections shield them from the lawsuit. This could become the first case in the US to hold companies accountable for psychological harm caused to children by AI.
Why concern is growing: Sewell became increasingly attached to the AI chatbot, and his mental state deteriorated as a result. The incident showed that conversations with AI can deeply affect the mental health of minors. Both Character.ai and Google have said they are adding new safety features to protect children on the platform, such as blocking conversations about self-harm.
What is Google's role: Google argued that it has no direct connection with Character.ai, but the court rejected this claim, since Character.ai's AI system is based on Google's LLM. In addition, Character.ai was founded by two former Google engineers, a link the court found sufficient to hold Google accountable in the case.
Read more:
Vietnam Moves to Ban Telegram: A Crackdown on 'Toxic' Content
