Artificial intelligence (AI) supposedly has an answer for most of humanity’s problems – from the mundane (menu planning) to the galactic (mining on Mars). As conversations about mental well-being – and, hopefully, awareness – pick up around World Mental Health Day (10 October), we ask: can AI solve this distinctly human (and so far, Earth-bound) problem?
The global mental health crisis has grown to alarming proportions. One in four people worldwide is estimated to suffer from a mental health condition. In the US, suicide became the second leading cause of death for children aged 10 to 14 in 2021. In Asia, recent incidents of shocking violence in Hong Kong and Thailand have been linked to mental health problems. Treatment and care can be hard to find and often prohibitively expensive, with a treatment gap of up to 90% in some countries. By one calculation, the global economic cost of mental disorders is about US$5 trillion – more than the GDP of most countries.
Can a problem created by the complex interaction of biological, chemical, social and cultural factors be within AI’s powers to fix? Can AI make us happier, or at the very least, less troubled?
Freud as chatbot
AI and psychotherapy have intersected since way back in the 1960s. One of the world’s first chatbots, Eliza, was designed by MIT scientist Joseph Weizenbaum as the alter ego of a psychotherapist, to study human-machine interactions.
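For the technically curious, Eliza’s trick was little more than keyword pattern matching plus pronoun “reflection”, which a few lines of Python can sketch (the patterns below are illustrative stand-ins, not Weizenbaum’s original DOCTOR script):

```python
import re

# Swap pronouns so the user's words can be echoed back ("my job" -> "your job").
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

# (pattern, response template) rules; Eliza's real script had many more.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)", "Please go on."),  # fallback keeps the conversation moving
]

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def eliza_reply(utterance: str) -> str:
    text = utterance.lower().strip()
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(eliza_reply("I feel ignored by my family"))
# -> Why do you feel ignored by your family?
```

No understanding is involved: the program recognises a keyword, reshuffles the user’s own words and hands them back – which is precisely what made the effect on Eliza’s users so striking.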
Today, the AI juggernaut is making deeper inroads into the mental health space, which has become part of the rapidly growing US$1.5 trillion global wellness industry, offering a clear financial incentive for the tech industry.
AI-driven therapies use natural language processing and machine learning algorithms to create chatbots trained to stand in for real-life therapists and provide assistance 24/7. This has the potential to rapidly scale up access to mental health services – a boon for patients everywhere.
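As a rough sketch of how such a service might be wired up today – the model name, system prompt and single-user history below are illustrative assumptions, not the design of any particular app, and real products layer on safety policies and clinical escalation paths omitted here:

```python
from openai import OpenAI  # pip install openai; assumes OPENAI_API_KEY is set

client = OpenAI()

# Illustrative system prompt only; a production mental health app would add
# vetted safety rules, crisis escalation and clinician oversight.
SYSTEM_PROMPT = (
    "You are a supportive listener. Respond with empathy, validate the "
    "user's feelings, and recommend a qualified professional for anything "
    "beyond everyday stress."
)

history = [{"role": "system", "content": SYSTEM_PROMPT}]

def chat(user_message: str) -> str:
    """One turn of a therapy-style conversation, keeping the running history."""
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; swap for whatever is available
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat("I keep getting criticised at home and I'm exhausted."))
```

The 24/7 availability comes essentially for free: a script like this answers at 3am as readily as at 3pm, which no human therapist can.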
AI also has the potential to quickly diagnose mental health problems, detect behavioural changes faster, and deliver self-guided therapies efficiently. A recent study monitoring 4,500 users of Youper, an AI-powered mental health app in the US, claimed users experienced decreased symptoms of depression and anxiety after two weeks of using the app.
Empathy: No longer just a human response?
These findings may not seem too far-fetched when you consider that, in a recent conversation that I had with a chatbot about a hypothetical problem, the machine demonstrated empathy (“I can understand how frustrating it must feel to constantly face criticism”), validated experiences (“Remember, your feelings are valid”), affirmed my sense of agency, and provided suggestions on how to deal with the situation (“I understand that discussing this with your partner might feel challenging, but open communication is crucial in addressing family conflicts”).
It sounded like any ChatGPT response: plausible, but dry, or what Weizenbaum described as an “illusion of understanding.” But can a chatbot really consider the underlying nuances of human emotion? Can it effectively handle high-risk cases?
Consider this: human therapists can also tune in to what their clients are not saying, such as a person’s demeanour and body language. This is not possible on AI-driven platforms – at least not yet. Tellingly, the chatbot I interacted with also recognised its own limitations, suggesting I seek “the guidance of a professional family counsellor.”
There are other concerns too. Some researchers question the robustness and heterogeneity of the models behind these chatbots: should a model trained on data from one population be used on people outside that group, given the risk of the AI delivering incorrect responses? Others point to privacy issues, and to the fact that clinicians and researchers still face huge knowledge deficits – especially regarding mental disorders and how the brain works – which means any AI model will remain incomplete until humans fill those gaps. There are also questions over the profit-driven motivations of digital mental health platforms.
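To make the heterogeneity worry concrete, here is a deliberately contrived sketch in which a model trained on one (entirely synthetic) population scores well on its own group and near chance on another – everything below is made-up illustration, not clinical data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-in for population differences: in group A the label is
# driven by feature 0, in group B by feature 1.
def make_group(signal_column: int, n: int = 500):
    X = rng.normal(size=(n, 2))
    y = (X[:, signal_column] > 0).astype(int)
    return X, y

X_a, y_a = make_group(signal_column=0)
X_b, y_b = make_group(signal_column=1)

# A model trained only on group A...
model = LogisticRegression().fit(X_a, y_a)

# ...looks accurate on group A but performs near chance on group B.
print("group A accuracy:", accuracy_score(y_a, model.predict(X_a)))
print("group B accuracy:", accuracy_score(y_b, model.predict(X_b)))
```

Reporting per-group metrics like this, instead of a single overall score, is one standard way researchers surface exactly the mismatch the critics worry about.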
Despite these challenges, given the scale of the global mental health problem, we cannot dismiss out of hand the value of AI-led intervention. Suicide prevention, medication management and drug discovery are promising areas. A recent Canadian study showed AI can effectively recognise people at high risk of suicide based on their social media data, potentially giving professionals timely information for more precise interventions.
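The study’s particulars aside, the general recipe behind this kind of risk detection is text classification. A bare-bones sketch follows – the posts and labels are invented placeholders, and any real system would need clinically annotated data, rigorous validation and, above all, human review before anyone is contacted:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented placeholder posts; real work uses clinically annotated corpora.
posts = [
    "feeling hopeful after a great week",
    "had a fun afternoon with friends",
    "nothing feels worth it anymore",
    "i can't see a way out of this",
    "excited about starting the new job",
    "i feel completely alone and stuck",
]
labels = [0, 0, 1, 1, 0, 1]  # 1 = language a clinician should review

# TF-IDF features plus logistic regression: a common text-classification baseline.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# Score new posts and route high scorers to a human, never to automated action.
for post in ["what a lovely morning", "i feel like giving up on everything"]:
    risk = model.predict_proba([post])[0, 1]
    print(f"{risk:.2f}  {post}")
```

The point of such a system is triage, not diagnosis: it narrows millions of posts down to the handful a trained professional should look at first.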
The pursuit of happiness continues
If AI has its limits, what then of our quest for tech-driven mental well-being (or eternal happiness, as some would say)? Some researchers have pinned their hopes on implantable brain devices that can directly stimulate specific brain circuits as the last mile in keeping human emotions perpetually regulated. Recent lab breakthroughs aside, such interventions could be years away from going commercial.
There is, however, a hidden quality – beyond reading between the lines and sussing out the meaning of things left unsaid – that we humans come pre-programmed with to help each other through difficult situations, and that AI cannot yet emulate: compassion. Described as “a strong feeling of sympathy with another person’s feelings of sorrow or distress,” even 40 seconds of it has been shown to reduce anxiety levels. It takes many forms – taking the time to listen, or offering to bring lunch or a cup of coffee to someone having a stressful day.
The corporate world – a hotbed of stress – can benefit from compassion too. Compassion training has been shown to reduce perceived stress and symptoms of depression and anxiety in organisational settings. (While this may sound fluffy, Stanford Medicine has been running a dedicated research centre on the subject since 2008.)
And lest we forget, compassion starts within – with us being kinder to ourselves when we experience difficulties or shortcomings (less negative self-talk, more self-forgiveness) and putting our experiences into the larger perspective of being human. Because, at the end of the day, while technology is a powerful enabler, it may be some time before we can delegate our mental health problems to AI-driven machines, no matter how sentient they may seem to us.