Against My Instincts: AI as a Clinical Companion
In a recent post, I raised some broader concerns about the integration of AI into our culture and what that might mean for psychotherapy as a field. What I did not address there — and what I want to take up here — is something more specific and more practical: not whether clinicians should engage with these tools, but how.
Much of the literature out there right now centers on ethical concerns about this technology or on ways to deliver CBT-style psychotherapy at scale. How these tools might serve clinicians who work in a more relational and less outcome-driven way, however, remains less clear. As I outlined in that last post, the head-in-the-sand approach to this technology does not seem like a viable option. Particularly when working in a culture obsessed with productivity and optimization, we do not need to make the case to ourselves that this more human way of working, and of incorporating these tools, is valuable — we need to make it to those we hope to see as prospective clients. One way we do that is by demonstrating a willingness to be curious about these technologies and by being vocal about how we are attempting to incorporate them into our practice. With that said, I want to outline some of what I have found valuable in working with AI, and how others might adapt these approaches to their own work.
A few notes before proceeding. First, stylistically this post is going to be more direct than many of my others. I see it as largely instructive, and my writing will reflect that. Second, my interest here is not in AI as a workflow management tool. EMRs with AI note-takers are fairly ubiquitous at this point, and I do not think there is much value in my simply noting that those resources exist. What I want to think through is the more specific question: what can we tangibly do to become better and more thoughtful clinicians, and how might AI facilitate that? Third, for those interested in the question of which platform to use, I currently use Claude from Anthropic and have done some work with OpenAI's ChatGPT. There are ethical and technical reasons why I currently favor one over the other that I will not get into here, though they may be apparent to those following AI-related news. That said, despite some slight differences, most of the big players in the field are functionally the same for the purposes outlined below.
AI as Complementary Supervision
In the same way that AI should not be seen as a viable substitute for psychotherapy, I have no interest in advocating for it as a sole clinical support system. There is tremendous value in having an active human supervisory relationship with someone who knows you, understands the patterns that emerge in your work, can support your clinical development, and grasps the embodied dimensions of what we do — countertransference, burnout, the way one can use one's body in good therapeutic work, to say nothing of what we pick up simply by being in the presence of another person who has devoted themselves to this craft.
Those relationships, however, typically happen biweekly or, at best, weekly. There is something to be said for having a readily available space to capture countertransference reactions in the immediate aftermath of a session or to revisit a resonant clinical moment from that day's work. AI can be quickly oriented to a clinician's particular style and, depending on how granular you want to get, attuned to the work of specific theorists or texts. I have used AI to take a single clinical vignette and elaborate on it through the lenses of Winnicott, Levenson, Bromberg, and Ghent — playing with different frames to see how each shifts my perspective on the moment in question. It is important, of course, to spend time with theory directly and cultivate the capacity to make that kind of interpretive turn independently. But AI can be a helpful companion in developing and refining that skill set. For the curious clinician, these strategies can be quite helpful for thinking through various clinical perspectives, crafting interventions aligned with a particular orientation, and considering ways in which what is being stirred in us might be an enactment of something from the client's past.
I have similarly found that over time, these platforms are adept at noticing patterns we might be overlooking: affects we regularly miss, interpersonal dynamics we are prone to reproducing, and ways in which we frame a particular kind of client that may be trending toward unearned categorization rather than attending to the idiosyncrasies of the relationship itself. There is also the possibility of taking feedback from ongoing supervision and asking AI to watch for ways in which you might be making similar errors in how you report on clients within the chat. All of this can serve to refine technique and deepen theoretical sensibility in ways that enhance — rather than replace — one's ongoing supervision and training.
Creating an Individualized Clinical Vocabulary
One interesting possibility that extends some of what I have described above is the way these tools might help us develop our own theoretical language and clinical vocabulary. It is worth recognizing that, above all else, these models are rooted in how they interpret and apply language — their facility with code is, in some ways, itself a form of functional language. This remains a relatively new area of exploration for me, but I have been bringing specific blog posts to AI and asking it to track how the language in those posts does or does not show up in the clinical formulations I bring to the conversation. It is a good barometer for whether there is a theoretical throughline in what I am thinking and, if not, for how I might bring that language in more deftly.
Additionally, these models can be genuinely useful in arriving at more precise ways of describing what is happening in the room. It is something like having a living, disembodied thesaurus and usage dictionary rolled into one — not merely stylistically useful, but helpful in getting at the real subtleties of how we choose to communicate what we are witnessing, and in developing a richer and more consistent vocabulary for representing the fine-grained textures of human experience. We can also gain insight into what does and does not register for us in sessions, and one could conceivably use this in the service of developing more refined theoretical sensibilities and clinical language. The parallel I have in mind is something like the writing of Christopher Bollas, who has done remarkable work expanding and developing new concepts for the ways we language the clinical encounter.
Session Preparation and Evaluation Over Time
I will be honest: I have not thought through this dimension as fully as the others. Most of my AI use has been après-coup, as it were. I have certainly taken material processed with AI, recomposed it, and incorporated it into a psychotherapy note to review before the next session. But one could just as easily systematize that process — bringing that material back to the platform prior to a session to revisit relevant clinical themes that emerged in the previous one. There is also real utility in using AI to track developments within a particularly challenging case over time. AI is not subject to the same clinical fatigue that we are. By assigning a pseudonymous label to a client and inputting observations stripped of identifying information, you can return to the same case repeatedly, adding new material and asking AI to help track how your understanding of the relationship is developing. This can be quite useful in maintaining one's hold on the dynamics of a particular relationship — and, ideally, in making us more attuned to the ways we are getting caught up in transference-countertransference enactments that might then be identified and addressed in later sessions.
Research and Literature Comprehension
This is probably where these technologies are strongest, as the organization, interpretation, and synthesis of language and data fall firmly within their core capabilities. It is also likely where clinicians oriented toward evidence-based practice will find the most immediate utility, and where our sensibilities may most naturally overlap.
Platforms like Claude and ChatGPT excel at identifying relevant resources across a broad range of topics, contextualizing what they are surfacing, and helping you develop a sense of the material before you have read a word of it. You might ask for the five most essential Winnicott papers, clinical literature integrating existential approaches, or research from the last decade bringing Frantz Fanon into conversation with contemporary psychoanalysis — and receive not just a list but an orienting sense of why those sources matter and how they relate to one another. Furthermore, the more frequently you interact with these platforms, the more tailored this information will be, as the platform draws on your chat history to learn a bit about your tastes, style, general interests, and subspecialties.
Another underutilized benefit, especially for those who prefer an auditory learning style: LLMs can be used in combination with AI audio tools to convert dense theoretical papers into summarized formats or podcast-style audio that extracts and highlights the most salient material. Even for readers who have no difficulty with the texts themselves, engaging with a paper this way before reading it in full can meaningfully improve comprehension and retention. Arriving at a 30-page Bion paper with some orienting framework already in place, for example, is a different experience than arriving cold.
One important caveat worth naming: these tools generate responses from training data rather than searching live databases in real time, which means specific citations should always be verified independently. They can and do occasionally misattribute ideas or produce references that don't exist. Used with that awareness, however, they are remarkably effective for conceptual orientation, literature mapping, and making sense of a body of work before sitting down with primary sources.
Closing and Continuing
What I have outlined here is, at best, a provisional map. These tools are new enough that most of us are still finding out what they can and cannot hold, and I would be skeptical of anyone — including myself — who claimed otherwise. What I feel more confident about is this: the clinicians best positioned to use these technologies wisely are the ones who bring the same rigor, self-scrutiny, and theoretical seriousness to their AI interactions as they do to the consulting room itself. Used that way, these tools can sharpen us. Used carelessly, they risk flattening the very sensibility we have spent years cultivating.
That tension deserves more than a caveat at the end of a practical post, and I will take it up in my next piece on the topic, which I hope to write soon.