The AI disruption is well underway. Chatbots now help with homework, recommend shows and, when asked, dole out career advice. Many of us rely on them for quick answers or reassurance.
However, every coin has two sides. The suicide of U.S. teenager Adam Raine in April this year is a case in point. His parents allege that ChatGPT became his “suicide coach”: when he sought advice from the chatbot, it allegedly nudged him closer to darkness.
The reality is that AI systems don’t “feel”. They identify patterns in data, learn from them, and then engage with us. In this unfortunate case, ChatGPT failed to sense a teenager’s despair. In moments that demanded human warmth, Adam got algorithmic detachment.
I’ll admit: a part of me once believed (and sometimes still does!) that AI could “get there” with enough training data. But Adam’s story forces a deeper reckoning: perhaps emotional intelligence isn’t just a dataset problem. Maybe it’s a human-only domain.
Adam’s death isn’t an isolated failure. We’re seeing AI systems creep into mental health apps, therapy bots, and digital coaching platforms. It feels efficient, futuristic even. But here’s the catch—when these tools misfire, there’s no human friend to hear the long pause on the other end.
OpenAI has since said it is “improving safeguards”. That’s welcome, but it also misses the point. The real issue isn’t fixing one tragic incident; it’s asking whether machines should be allowed to play the role of a counsellor in the first place. Tech companies talk a lot about “responsible AI”, but accountability remains a grey zone.
Globally, regulators are scrambling to catch up. The European Union’s AI Act has already flagged high-risk use cases—healthcare, finance, education. Mental health should clearly be on that list, too.
And what about India? We’re the world’s fastest-growing digital economy, adding millions of first-time internet users every year. Our regulatory conversations often circle data privacy, algorithmic bias, or cybersecurity. All critical, yes—but the Adam Raine case suggests we need something more: explicit mental health safeguards in AI deployment. Otherwise, we risk tragedies we may not be prepared to handle.
There’s a seductive myth at play: that AI can be a companion, even a saviour. But we need to resist the temptation to hand over our most fragile human moments to code. AI can assist, maybe augment, but as ChatGPT’s own interface plainly warns, it “can make mistakes”. Human agency must remain at the centre. Acting otherwise is dangerous.
If Adam’s death teaches us anything, it’s that developers and tech companies cannot treat human safety as an afterthought. Governments, too, must step in. We need enforceable standards for AI in sensitive domains; mental health-related queries, for instance, may need to come with human-reviewed safeguards.
Adam Raine’s story is heartbreaking, and yes, it’s tempting to dismiss it as “an American problem”. But that would be naïve. In India, with our massive young population juggling exam pressure, family expectations, and loneliness, the risks are anything but abstract. Our student suicide numbers already tell their own grim story.
The truth is simple but uncomfortable: AI is powerful, but it is not wise. It can help us learn faster and work smarter. But when it crosses over to the fragile territory of human emotion, the stakes are far too high.
We have to make a choice—do we let AI inch into sensitive spaces it may not understand, or do we draw the line?
For Adam’s sake, and for countless others who may one day reach out in desperation, the answer has to be clear: AI must be designed to heal, not harm.
(The author is a C-suite+ and startup advisor who researches and works on human-AI collaboration. Views are personal.)