AI and Mental Health: promise, paradox and risks

AI is moving rapidly into mental health care, promoted as a magical remedy for workforce shortages and a means of restoring efficiency to a strained and broken system. But when the primary measure of medical progress becomes productivity, we risk hollowing out what makes medicine an embodied healing profession. The danger is not that intelligent machines will take our place, but that clinicians are becoming more machine-like: efficient, compliant and disconnected from their patients, each other, their internal world and the craft of understanding what it is to be human.

I believe that the expanding use of AI in mental health, in diagnosis, formulation and management, carries a profound contradiction and risk. AI tools promise speed, pattern recognition and an ability to synthesise vast amounts of information in seconds, and many of those claims are true. But the very efficiencies they offer risk eroding something slower, relational, complex and deeply human that lies at the heart of clinical practice. This is a paradox: if clinicians are increasingly guided by AI-generated formulations, over time they will lose the capacity to think in the layered way that real people require, integrating narrative, context, emotion, culture and uncertainty alongside diagnostic classification.

A risk of implementing AI is that psychiatry and medicine become increasingly “analytic and precise”, yet less and less for humans, by humans. This is, I think, a second paradox: the marketing of AI promises “medical/psychiatric exactness” which, although seemingly desirable, will come at the cost of the time, effort and human task of engaging real people with intricate, layered challenges. In our increasingly inflamed and complex world, the trajectory is that we as bodymind beings are further reduced to quantised criteria and datasets. Quite simply, this is not working in our AI-driven “social” media and news world, so why would it work any differently in AI psychiatry?

Medicine and psychiatry depend on presence, connection, containment and an attuned capacity to sit with suffering rather than merely categorising it. As AI accelerates into clinical spaces, driven by cost-saving and commercial incentives, there is a danger of clinicians drifting into technician-like roles, following templates and accepting algorithmic coherence as the only clinical truth. A new subset of clinicians may even be attracted to psychiatry precisely because of this narrow, anti-therapeutic role, to the detriment of patients and the profession.

This pattern is already visible in the rapid growth of private neurodevelopmental clinics, particularly in London, where families understandably pay thousands for ASC or ADHD assessments to escape years-long NHS waits. Large corporations step in, profit becomes a motivator and perverse incentives follow, with diagnoses becoming transactions. Assessments are often undertaken by single clinicians with no possibility of follow-up: the money is in the assessment… If AI shortens assessment time and boosts revenue, the market will follow and continue to flourish while continuity, therapeutic containment and responsibility wither. Who stays with the patient after the label is given?

“Brief sporadic consultations”, now common in overstretched systems, represent unacceptable medical practice, and it is too easy for those who call this out to be labelled “anachronistic… out of touch…” as a defence against reality. If AI genuinely freed clinicians to spend more time with patients, to deepen care, it could be transformative. But the more realistic future is that even “good” AI will be used to reduce human contact further, feeding a spiral of disconnection where the “task” is optimised and the relationship is hollowed out. Productivity-focused AI tools seem likely to privilege linear, left-hemisphere ways of working: surface coherence, rapid categorisation and premature closure. Meanwhile, right-hemisphere capacities, such as empathy, relational sensing and tolerance of ambiguity, become underused and increasingly underdeveloped. The third risk, or paradox, is that over time new and repeated AI-led cognitive patterns will shape our neural architecture, including corpus callosum connectivity, if this is not happening already… A profession, or a species, drifting toward technologically amplified left-brain dominance may lose the ability to sense what is unsaid, to bridge meaning and feeling, and so lose the very essence of humanity.

There is an irony, or fourth paradox, in using AI to articulate these concerns, but it is not a contradiction. I have used AI in crafting this piece. But those of us who grew, learned and trained long before AI entered our world or the clinic carry forms of embodied and narrative knowledge that cannot be reverse-engineered. At sixty-two, shaped by patient stories, my own life and clinical and personal uncertainties, I am not outsourcing judgement; I am using a tool to express fears understood first-hand. Presence, relational depth and formulation are not archaisms but timeless requirements for healing. The risk is not that experienced clinicians forget this, but that newer generations come to believe such skills are optional or inefficient, or never learn them at all.

All these tensions become concrete when examining emerging AI mental health platforms. They offer genuine administrative relief, but their template-driven outputs risk encouraging cognitive passivity, where clinicians may confuse fluency with truth. They may subtly shape clinical thinking, even when clinicians believe they remain in control. Outsourcing our minds and thinking to AI risks eroding the craft at the core of our work. Of course, it is amazing that AI can record a 90-minute session so incredibly well and “save time”. But the process of recall, of metabolising patients’ narratives through thinking, writing and dictation, leads to an understanding of, and connection to, our patients and ourselves that we lose through expediency. Too quickly, our future clinicians may not even know that they don’t know this. The GP who has been with generations of a family for 20-30 years does know; the families they care for certainly do. We are all on the same journey, but AI is not.

Even if one cared only about money, failure to recognise that time with people is a fundamental part of any management plan will lead to poorer longer-term outcomes. The burden of mental illness often persists and grows, and treatments require time, patience and human engagement. Cutting human contact saves in the short term but worsens chronicity. The danger is not really malfunctioning technology, although that is a risk, but that we accelerate a paradigm where human presence becomes optional and find ourselves in a world for which we are unprepared. Perhaps we are there already, when we cannot quite “understand” what on earth is going on: whether in a war-torn place “far away” or in our own towns, cities and politics.

From my perspective, three concerns stand out. First, AI risks reinforcing premature, left-brain-dominant formulations that flatten complexity. Second, AI’s persuasive outputs will seduce clinicians into surrendering the hard work of thinking that is an essential part of treatment. And third, commercial incentives will deepen the shift from relational care to scalable, commodified psychiatry. The danger is not that AI becomes too emotionally intelligent but that humans become less so. The future of psychiatry, and of humanity, depends on protecting the engaged, integrative and relational thinking and feeling that no system can replicate, for we are embodied and AI is not. We need to shout this loud and clear!


A Sage Reflection: On AI, Attention and the Quiet Work of Being Human

Simon's piece speaks directly to one of the central concerns at the heart of Sage Practices: what happens to us as humans, and as healers, when the systems around us become increasingly driven by forces that pull us away from relationship.

For all its promise, AI presses on a fracture that was already present in healthcare: the slow thinning of the connective tissue that holds care together. At Sage we are not anti-technology. Many of us use AI in our daily work. But we also recognise that tools are never neutral. They shape the hands that use them. They can either deepen our attention or erode it. They can free time for presence, or fill every gap with more tasks, more data, more noise.

Simon names the paradox clearly: the risk is not simply that machines will replace clinicians, but that clinicians will begin to think and feel like machines. Narrower. Faster. More certain. Less attuned.

Sage exists for the opposite reason.

We are trying to protect and nurture the forms of knowledge that do not show up on dashboards or productivity charts: the narrative, relational and embodied ways of knowing that have always been the heartwood of medicine. We call this the craft of understanding, the slow work of noticing, the dignity of giving someone time. These are not inefficiencies. They are the treatment.

Gathering stories and growing a network between practices, patients, navigators, community cultivators, artists and clinicians is part of this work. Each story reminds us that care happens in the small and the local. It happens in conversations that cannot be templated. It happens in relationships that grow over years. When shared, these stories form a mycelial network of meaning and possibility, countering the loneliness and fragmentation that so many feel in our current system.

AI is already shaping healthcare. But the question we must hold together is this: what do we want to protect as it does?

Sage Practices is one response: a loose, generous, human-scale weaving of people who believe that presence, context, curiosity and compassion matter. That story matters. That community matters. That the future of medicine is not only about innovation but about remembering what we must not surrender.

Simon's piece is a call to stay awake. Not to outsource the parts of ourselves that make healing possible. To resist the drift into machine-like thinking. And to keep tending the relational ground that cannot be automated.

Sage is simply inviting us to do this together, so none of us feel we are standing alone in the undertow.

#slowmedicine

#relationshipcentredcare

#AIinmedicine

#bodymind

Simon Lewis

Dr Simon Lewis is a consultant child and adolescent psychiatrist at University College London Hospitals (UCLH), with a long history of working with young people who present with complex mental-health and developmental needs. He led an adolescent inpatient psychiatric unit for 21 years, and remains deeply committed to an integrative “bodymind” approach that values compassionate, collaborative, whole-person care over reductive diagnoses.
In addition to his clinical work, Simon is an honorary clinical lecturer at UCL Medical School and regularly teaches and examines medical students. Simon serves as lead for the Bodymind Faculty at the College of Medicine, is a trustee of Global Generation, and brings to his work a love of cycling, photography and family life.
