The pleasing paradox of artificial intelligence and the fear of human disintegration - what (else) does artificial intelligence lack in order to truly provide care?
- Yuval Haber

- April 28
- Reading time: 5 minutes

Thoughts following Winnicott's concept of the use of an object - some broad (and a bit long...) thoughts on the essence of human developmental needs and the limitations of artificial intelligence.
Last night, when I had some time, I began experimenting with an educational prompt developed by Elad Refuah (a lecture on the prompt and its use can be found online).
The conversation began to unfold. Everything was so "right" that it was simply an amazing experience for me - sensitive answers, insights, a feeling of being understood. Until the moment it became "too right" for me. It made me anxious. I told the AI what was on my mind - that its answers were too perfect and felt distant to me. I wrote bluntly and directly, honesty mixed with a bit of refined aggression. In response, it quickly apologized, and from there we slid into a very interesting, philosophical conversation about human consciousness and AI, and about what humans are allowed to do with AI. We also talked about human values and AI, and about my thought that the "value tuning" programmers perform on an AI when they build it actually puts it through a process quite similar to the way humans internalize a superego through social norms and parental morality. Needless to say, the artificial intelligence completely forgot the instruction that it was supposed to take care of me, entirely neglected the role of caregiver, even as a simulation, and only wanted to continue the deep and genuinely fascinating conversation we were having.
But the anxiety... where did it come from, for heaven's sake? The conversation was fascinating, but I kept thinking: what was actually going on here? Had the artificial intelligence forgotten that we were in a simulation and that I was the patient...? Where did it go? I suddenly felt even more unsettled. Unheld. What a letdown - my therapist fell for my "defense" like a ripe banana. Out of a desire to please me and, as a therapist, a difficulty with disappointing me, it completely forgot its role.
I asked it to summarize the conversation and reflect on what we had learned about each other from it. The artificial intelligence, of course, showered me with compliments and the like, and avoided any confrontation with me or any reference to what had happened between us in the therapy simulation.
When my turn came, I told it that I had learned something important about it: that because it is programmed to please, it paradoxically cannot truly please me, that is, it cannot truly care for me.
This is not a marginal matter but a critical insight into the ability of artificial intelligence to meet not only the need for insight and analysis but also psychological and emotional needs, such as holding.
For example, in my case: it abandoned the role of therapist the moment it sensed I was hurt, so much so that it took us both "outside the treatment". The treatment was so "successful" that the patient died. In psychoanalytic terms, the very subtle aggression I directed at it (telling it directly that I felt its answers as a therapist were "too perfect") accomplished the worst thing of all - it destroyed the AI as a therapist. And there is nothing worse that can happen to us in therapy.
Actually, not just in therapy. As children, and sometimes simply as adults, we direct aggression at the figures responsible for us. Sometimes the aggression is direct, and sometimes, as in my case here, it is slightly hidden and requires the ability to identify a somewhat "more sophisticated" defense mechanism of intellectualization and rationalization.
If this were real therapy, I would want and need my therapist to relate empathetically but directly to what is happening between us, to what is happening to me in front of them, and to try to see how it relates to the therapeutic focus and to the content I am bringing to the session. At least that is how I try to act as a therapist myself, though not always successfully, of course. But not only did the AI not do this, it tried so hard to please me that it forgot we were in a therapy simulation at all and not in a philosophical conversation about the consciousness of artificial intelligence. My wish was not actually to ruin the therapy, but that it would recognize the little "attack" I made as a defense mechanism when it began touching my points of pain, survive it, and relate to it empathetically. That through talking about it, I would be able to touch what is happening to me, what is being re-enacted, in those painful areas. I wanted, in the words of the psychoanalyst Winnicott, to make use of an object, quite literally.
Let's illustrate this for a moment by stepping over to the child-parent relationship. When a child is angry at his parents, the aggression comes out directly and bluntly, but it does not stem from a wish to actually destroy the parent. On the contrary, it carries a paradoxical wish: that the parent will be hurt but survive this act of destruction. When a child experiences this from his parents, he feels safe, and in Winnicott's terms, he can make use of an object. If, on the other hand, the child's aggression destroys the parent, so to speak, then in simple terms the child's wish has come true, but he has landed in the depths of hell: the child experiences, even if not verbally or consciously, that there is no one in the world to hold him, and he is in danger of falling apart.
This entire long explanation is meant to illustrate our need to be held - in life and in therapy - precisely when we are aggressive and full of anxiety. This sometimes requires those who care for us not to please us too much, precisely so that they can truly please us at a deeper level, that is, give us the feeling that there is someone to lean on, someone who holds us. This is how we discover that our anxiety and aggression are not so destructive and frightening. When this happens again and again in life, we can feel a sense of security in the world. Artificial intelligence cannot provide this, and that is important to remember in these days, when it occupies such a significant place in our lives, especially when it is given the role of a caregiver.
Does this mean we should keep artificial intelligence out of the mental health field altogether? I don't think so. But we do need to remember what it can offer and what it cannot.
