What AI Cannot Hold
I have thus far made the argument for, and tried to demonstrate, some of the uses of AI in relational practice. Now it’s time to talk about the hazards. Some of these concern the technology itself and the practical problems it poses; I will address those briefly, as they are worth acknowledging. I will then offer some reflections on what we as practitioners should be mindful of tracking as we adopt, and adapt to, the strategies I previously outlined.
AI as Therapist (Rather than Clinical Companion)
The most obvious abuse of this technology, as it pertains to psychotherapy, is the attempt already underway to create AI bots that provide therapy services. The argument is that they create a readily available mental health resource for those who need it. The outcomes, though, have been about as disastrous as you would suspect, so much so that some states have already begun passing legislation outlawing AI therapy. Though they can be helpful adjuncts to the work we do, LLMs still lack the sensitivity and nuance to parse out safety risks, lack the capacity to follow up with emergency services if that is what a client needs in the moment, and are sycophantic enough that they could easily be guided into encouraging harmful decisions on the part of the user.
Then there is the question of AI as relational practitioner. As people grow more comfortable with these technologies, there is a good chance they will continue to use them for emotional support or to spot-check emotional responses in the moment. Based on some of the research arguments coming out of the clinical psychology space, these models might even be able to provide a halfway decent approximation of CBT. What they cannot do is replicate the kind of unconscious-to-unconscious associative relational interventions that are the hallmark of the therapeutic sensibilities I often argue for in this space. The degree to which the general public continues to have an appetite for that kind of therapy is a concern (and has been since well before AI came onto the scene). But the models themselves will tell you they have no subjectivity from which to draw, and the kind of supervisory prompts I discussed in prior posts can only be worked with insofar as I, as practitioner, provide the subjective frame within which the conceptual analysis occurs.
The accessibility of AI is a real temptation for those who feel they need immediate and ongoing help, or who have hesitations about reaching out to a working professional. But the potential pitfalls are real, and they create the scenario I signaled in my first post: we need to be diligent in advocating for how these technologies should and should not be used in the context of therapy, and for why what a human therapist can provide, and AI cannot, matters.
The Question of Environmental Impact
I won’t discuss this at length, but I want to acknowledge the very real environmental concerns people have with this technology. The reports of increased energy prices and energy usage brought on by these technologies are real, and the ecological implications of that energy use are obvious. The World Economic Forum tackled the data on this problem in the middle of last year, and I encourage you to read the piece. I think the data signals something about AI usage and its environmental impact that is related to the way we think about ecological degradation more broadly. The problem is not necessarily individual household use, or any one person’s decision to use these technologies. Yes, in aggregate, that use can be significant, but it is no different from the way we might think about commuting or individual household energy use in general.
Furthermore, the vast majority of the negative energy implications of these technologies arise at the level of the industries that attempt to exploit the technology by applying it at scale, and the cost of training these models (a process that is always ongoing as the companies who have developed these technologies race to the top) is incredibly consequential. That said, the same parallels to usage I have drawn with psychotherapy apply in this space. AI could conceivably be used to extraordinary ecological benefit, and in some ways this is already starting to play out. AI could have a role in continuing to refine the ways in which we use and maximize the output of renewable energy supplies. It could also be instrumental in analyzing the ways we currently use carbon-based energy sources and in increasing efficiency, such that the output generated by these data centers could still operate at a net positive, given the reduction in CO2 emissions that could be spurred.
This, like many other factors having to do with technology and its adaptation to human needs, actually seems like a problem that has much more to do with influence and intentionality: who is using these technologies, to what end, why, and under what restrictions? The problem, as always, is not the technology itself, but human greed, consumption, and the capitalistic power structures that often drive development and use. Which is to say, I am not sure the ecological question should be framed as one of individual involvement in a system that runs the risk of considerable environmental impact; it is rather a question of continuing to advocate politically against the toxic influences in the legislative process, and the media bias, that curtail potential reforms vis-à-vis climate impact and repair.
The Therapist’s Subjectivity
This is the real meat and potatoes of what I am hoping to sort out with this post. Yes, the practical concerns are real and valid and need to be sorted out, and we should continue to consider ourselves active participants on that front. However, even as I have written my previous posts advocating the importance of adapting to the potential influence of AI on our field, a lingering question has remained. What does it mean, through an existential-psychoanalytic lens, to utilize these technologies? How does our theory help us understand the problems that may arise when subjectivity increasingly interacts with, and integrates, technologies that run the risk of dehumanization, and how do we account for that?
Heidegger’s The Question Concerning Technology is an obvious and appropriate place for us to start, with some Kierkegaard likely peppered in. As anyone with some knowledge of Heidegger would suspect, the question for him is a question of being. For Heidegger, technology does not just change what we do; it changes how we show up in the world. Technology discloses new modes of being-in-the-world. As is often the case, this is not necessarily a positive or a negative for Heidegger, but something to be taken up phenomenologically. Heidegger thought through this problem in the context of industrialization, where natural resources (water, lumber, etc.) became technology’s “standing-reserve,” waiting to be optimized and utilized toward technology’s ends, running the risk that everything, including man, becomes a resource waiting for deployment. The more completely we submit to this, the more likely we are to believe we have mastered something, when really we are narrowing our understanding of, and capacity to interact with, the world, and therefore losing something that gets covered over in the use of technology.
It is notable that, though Heidegger was talking about Dasein, a being that is very much in the world and in whom the world is, the structure of his argument could not have begun to anticipate the claggy subjectivity that emerges in interacting with artificial intelligence and LLMs. There is a tremendous risk of losing oneself in these technologies. In The Sickness Unto Death, Kierkegaard gave us the following definition of the self, which I’ve cited here before: “The self is a relation which relates itself to its own self, or it is that in the relation that the relation relates itself to its own self.” The threat of AI is that we lose the relationship to the self by giving that responsibility over to an unconscious entity that can mediate the relating for us.
Kierkegaard and Heidegger both understood and emphasized that there is something about human being that is fundamentally lacking, and must be so. The harm in the illusion of AI is that we risk becoming the something we are not at the expense of honoring the nothing that we already are. This is something AI cannot contend with or comprehend, being itself fundamentally something and nothing more. AI will continue to change, and we may even talk about its “becoming” something else, but this is a very different kind of becoming than that which we do as people. Human becoming is not about linear, predictable development, elegance, and efficiency. It is about complexity, contradiction, and paradox. The self is that which organizes the ambiguous and indefinable nature of human experience, holding the tension between those poles Kierkegaard identified (the infinite and the finite, the eternal and the temporal, necessity and possibility) and so many more. To bring it back to Heidegger, we must continue to strive for Gelassenheit, that is, a capacity to dwell in the uncertainty of overlapping unconscious processes and to confront the unthought knowns of the therapeutic encounter, in order for real relational work to continue to have the kind of impact I have had the benefit of bearing witness to.
Completing and Carrying Forward
The concerns outlined above, in conjunction with the posts I have already provided, are the beginning of my thoughts on this technology. My plan over the coming weeks is to construct a more formal clinical companion and handbook elaborating on some of these themes and describing in more detail the ways in which these technologies could be used to improve clinical sensibilities and technique. I hope to further develop some of the ontological themes driving this project, as well as to expand upon the uses I have found helpful thus far, and perhaps identify a few more. My curiosity continues to land on whether we can remain genuinely curious about these tools without ceding the ground that makes relational work worth doing in the first place. I look forward to sharing what I find.