Practical Considerations for the Use of AI in Clinical Practice
One of the leading voices on the frontiers of artificial intelligence, Dario Amodei, recently published an essay entitled “The Adolescence of Technology”. In this essay, Amodei, the CEO of Anthropic (the company behind the AI tool Claude), considered some of the ethical implications of where we are developmentally in the creation and use of AI tools. Amodei sees us as being at a turning point in determining what it means to develop thoughtful and mature control systems for the practical use of AI. Such systems would help safeguard against misuse and the potentially disastrous consequences that could come from AI run amok, or from its being put to nefarious ends by bad actors with disproportionate access to the power and resources needed to negatively impact large swaths of humanity.
This is not Amodei’s first foray into public discourse on this topic. He frequently engages in conversations in public spaces, and another well-known essay of his, “Machines of Loving Grace”, is a touchstone for those interested in an optimistic evaluation of where AI might be taking us. Now, a little more than a year on, Amodei has revisited these thoughts, prompted by the unprecedented progress he has observed in these models since the earlier essay was published. AI tools are now at a point where they are writing the code used for the development of subsequent releases and upgrades. This development should be understood for what it is: an accelerant that could drive the exponential growth of these models and what they can produce.
To ground this in a familiar frame of reference: in the science fiction around this topic, the inflection point for the leap forward in these technologies has always been the moment at which computers and machines became sophisticated enough to be responsible for the next series of advancements in other computers and machines. We are now at that point in time. By Amodei’s own admission, Claude now writes much of the code that goes into Claude. AI models are already more than capable of engaging in complex levels of thinking around computing, the hard sciences, and math, and of producing at least passable visual art and writing. Given these advancements, I am largely in agreement with Amodei’s sense that something big is on the horizon and that collectively we should be thinking critically about how, why, and to what ends we want to be engaging with this technology.
Psychotherapists seem mixed on this topic. Judging by the abundance of platforms that help facilitate the use of AI in practice for simple matters (such as recording sessions for documentation purposes, managing schedules, and keeping up with the other cognitive demands of a caseload), at least some portion of therapists seem open to the benefits AI can offer with respect to minimizing administrative tasks and the logistical burdens of the work. However, at the other extreme, there is acknowledgement of the risk that the intrusion of AI into the practice of actually providing psychotherapy presents. Worries now exist about people either using the LLMs with which most of us are familiar (ChatGPT, Claude, Gemini) to provide what they take to be psychotherapy, or about mental-health-specific chatbots developed and marketed for exactly that purpose (think platforms like Woebot, Wysa, and Youper) taking over the marketplace by replacing human therapists with a diminished version of what psychotherapy is meant to be. These platforms, unsurprisingly, are almost exclusively designed to utilize theoretical approaches like CBT, DBT, and mindfulness-based interventions.
I will cut to the chase: as far as whether therapists ought to be willing to consider ways in which AI might positively support their practice, I do not think we are at a point where we have much choice in the matter. I say that against my own temperament, and as someone who was quick to dismiss these models as either a fad or ethically compromising (or both) when they were first released. I am generally much more of a Luddite when it comes to my own willingness to adopt new technologies in most domains of my daily life. When it comes to most advances in technology, I tend to err on the side of “less is more” and worry that overuse can quickly divorce us from aspects of being that are an essential part of the human condition. It is not lost on me that, as a collective, humans in “technologically advanced” societies are often more insular, more lonely, sadder, more anxious, more conspiratorial, and more susceptible to manipulation and negative influence than they were prior to said advancements. At least that is the perception.
However, I also think, given the direction things are going, that AI is a technology we cannot ignore. Yes, we can be cynical and assume that Amodei and others on the front lines of developing these tools stand to benefit from talking up their world-shattering nature. Maybe it is mostly pomp. But artificial intelligence, with its ability to speed up, enhance, and think beyond the limits of practical human intelligence, could be a paradigm shift of an order we likely have not seen since the beginning of the Industrial Revolution. As much as computing technologies have already changed the world we live in, they were primarily limited to redistributing information that (a) already existed and (b) was created exclusively by humans. Now, with the move towards AI, and specifically the ongoing shift from generative to agentic AI, we are on the cusp of an environment where information technologies still predominate but now have the capacity not just to transmit information but to create it. It is not that human ingenuity is about to be rendered obsolete, but the range of spaces where human production is necessary, or even sufficient to the task, is (potentially) about to become far more limited.
Let me be direct in stating that this kind of encroachment by AI would not seem to apply to the kind of psychotherapy I advocate for in this space. Artificial intelligence is far better equipped to deliver the techniques and interventions found in the protocols of approaches such as CBT and DBT, precisely because those approaches aim to be universal and manualized in nature. In their quest to be broadly applicable, these kinds of interventions, often highly touted by clinical psychologists only to then trickle down to the tier of master’s-level clinicians, deliberately try to neutralize the relational element of the work in order to deliver something that feels scientifically objective. This is a great model for the development of an AI mental health platform. It is not hard to imagine an AI agent that can identify cognitive distortions, name them, and then recite a script designed to prompt the user to reframe those thoughts or draw on identified coping skills.
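To make the point concrete, here is a minimal sketch of the pattern such a platform might follow. This assumes a crude keyword-matching approach, and all of the distortion labels, cue words, and reframing prompts are invented for illustration; real products presumably use far more sophisticated language models, but the identify-name-reframe loop is the same shape.

```python
# Toy sketch of a CBT-style chatbot turn: detect a possible cognitive
# distortion by keyword, name it, and prompt the user to reframe.
# Cues, labels, and prompts are illustrative placeholders only.
DISTORTIONS = {
    "always": ("overgeneralization", "Is it true this happens every single time?"),
    "never": ("overgeneralization", "Can you recall even one exception?"),
    "should": ("a 'should' statement", "What changes if you swap 'should' for 'prefer'?"),
    "disaster": ("catastrophizing", "What is the most realistic outcome here?"),
}

def respond(user_text: str) -> str:
    """Return a scripted reframing prompt if a cue word is detected."""
    lowered = user_text.lower()
    for cue, (label, reframe_prompt) in DISTORTIONS.items():
        if cue in lowered:
            return f"That sounds like {label}. {reframe_prompt}"
    return "Tell me more about what is going through your mind."

if __name__ == "__main__":
    print(respond("I always mess things up."))
    # -> "That sounds like overgeneralization. Is it true this happens every single time?"
```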
However, there is clearly much more that happens in the context of therapy which is not easily programmable, or which ought not to be the purview of something that is not essentially human. Relational dynamics are explored. Concerns related to transference and countertransference surface. Matters of technique (when to speak or not to speak, how to deliver an interpretation, interventions that have little to do with what you say and everything to do with how you use your person) become fundamental ethical conundrums, and their resolution is oftentimes as much a part of the work as anything we are explicitly doing with our clients.
And yet, if we do not understand how to use and take advantage of these technologies, and to speak both emphatically and empathically to the potential downfalls of their over-implementation, I do think psychotherapy as a field could run the risk of being eclipsed by an infatuation with what these technologies offer. One thing Amodei understands quite well (and, more importantly, something his company strives to create guardrails against) is that these technologies are as much a potential weapon as they are a tool for unprecedented growth. Technologies are only ever as much of a “good” as the characters of the people who wield them. It remains to be seen to what degree AI will deliver on the starry-eyed promises of those who think it stands to revolutionize access to human services and civic resources and to generate wealth that is broadly shared. But I can say with some degree of certainty that this will not be the outcome if the folks who work in the domain of human services, care about social infrastructure, and advocate for equitable access to wealth are not engaging with these technologies and figuring out how to use them.
There is also a practical element to this which actually has me excited about the use of these technologies in the practice of psychotherapy. As I watched many peers over the last several months incorporate AI into the more mundane and repetitive aspects of the work we do (note taking, scheduling, bookkeeping, and the like), I found myself increasingly curious about the ways this technology could be used to enhance the parts of the work that I find genuinely enriching. Having played with these tools for some time, I think there is something exciting and noteworthy in what they have to offer in helping us become, not just better administrators, but more skilled practitioners with thriving businesses. The first and most obvious benefit of offloading these tasks is that it allows for more of what Thomas Ogden refers to as “reverie” in therapeutic work, opening time for reflection by freeing the internal resources that previously went to managing routine tasks. Or, more modestly, we are simply better able to feel grounded and present within and between sessions by offloading some of the cognitively draining aspects of managing a practice.
Beyond that enhancement, these models actually seem to be terrific companions for better understanding theory, conducting relevant research, providing supplementary supervision, and tracking trends both within individual clients and across your practice, all while helping you to better understand how it is you like to work and how to do it more effectively. I have received feedback from ChatGPT that is surprisingly aligned with feedback I have received from my current supervisor. Not just technical or theoretical concerns, but issues related to technique and process. It has also suggested interventions and responses that do not look terribly different from ones regularly offered to me by clinicians whom I respect.
I write about this quite a bit in this space. Psychotherapy has something to do with language and how we use it. That is not a novel idea. What is novel about the current situation is that we now have an available technology that excels at “reading” and “interpreting” texts at a pace well beyond anything that I could reasonably muster on my own. This is not the only thing that language does in therapy, but it’s not an irrelevant piece of it either.
Not everybody who does this work has access to a well-regarded psychoanalytic institute or the finances to pay for a top-tier supervisor. Being able to feed an AI platform a quick paragraph about something that occurred in a session and have it help break down the interactional patterns at play, make suggestions about how to think about the transference dynamics within a particular conceptual framework, or even (for the more directive, CBT-minded of us) offer ideas for specific interventions that are only recently researched and published, or that are simply not the first to come to mind for that clinician: all of this can be targeted to help make better therapy available to more people at a fraction of the cost of formal trainings and ongoing paid supervision.
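As a sketch of what that workflow could look like in practice, here is a minimal example using the OpenAI Python client (chosen only for illustration; the model name, system prompt, and synopsis are invented placeholders, and any real use would have to be fully de-identified and consistent with confidentiality obligations):

```python
# Minimal sketch: send a de-identified session synopsis to a general-purpose
# LLM and ask for a breakdown of the interactional pattern. Illustrative only.
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

# Hypothetical, fully de-identified synopsis; never include identifying details.
synopsis = (
    "Client arrived late, apologized repeatedly, and spent much of the hour "
    "asking whether I was frustrated with them."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {
            "role": "system",
            "content": (
                "You are a psychodynamically informed clinical consultant. "
                "Given a de-identified session synopsis, describe the "
                "interactional pattern at play and how the transference "
                "might be understood within an object-relations framework."
            ),
        },
        {"role": "user", "content": synopsis},
    ],
)
print(response.choices[0].message.content)
```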
To caveat all of what I just said: this is not an argument for AI as a substitute for any of these aspects of a relationally informed practice, any more than it is an argument for AI as a substitute for anything having to do with the real work of psychotherapy (besides perhaps figuring out how to get it to do insurance billing and follow up on my behalf). Yes, it has been interesting and helpful to play with the new technology; it has genuinely helped me think more critically about the work I do. It’s nice to know that between my biweekly supervisions I have a resource where I can plug in a quick synopsis of a clinical encounter and get feedback that incorporates concepts from object relations, interpersonal psychoanalysis, and existentially oriented therapies. But I wouldn’t trade an actual relationship with a good supervisor for anything. I feel informed when I walk away from a chat in which I mucked about with some theory and considered how certain theoretical constraints amplify parts of the work while diminishing others. That’s no replacement for working with someone over several years who gets to know you and can tailor supervision to your specific needs, to say nothing of the fact that I would never trust AI to be an appropriate container for the more intense countertransference feelings that can arise within a therapeutic relationship. I feel inspired when my supervisor can point to my growth, think critically about my development, highlight my potential, or show me where my blind spots are in an empathic manner that acknowledges those same blind spots were once his own. That can’t happen in any real way in a conversation with a computer.
There’s a lot to be concerned about here, and for good reason. For as much as could go right with this technology, there is a lot more that could go wrong. But it is here, and we will be better off if we know how to use it than if we don’t. At the end of Don DeLillo’s Underworld, the character Sister Edgar, a kind of moral compass throughout the text, is subsumed by technology. A sacrificial figure who spends her days working with the disenfranchised youth of the South Bronx, she meets a death that brings with it a kind of technological transcendence. This transcendence is not romanticized by DeLillo. Published in 1997, Underworld is part of a lineage of novels, probably starting with Gravity’s Rainbow and extending well into the contemporary age, that present narratives about the post-WWII era highlighting the technological trappings of the world we were in the process of creating for ourselves. Few listened, and now we must learn how to live within the system while simultaneously reclaiming the things that make us more than human, and therefore beyond the systems we have constructed. The problem is we may need the system to be able to outpace the system. The work of figuring out whether that will be the case begins now.