@grok, why doesn’t he love me?
by Mary Lawrence Ware | June 10, 2025
I love reading people’s texts whenever they happen to be seated close to me. You can judge me for it. I don’t mind.
And so, much to my delight, last month, crammed into a rush-hour Paris metro, I found myself next to a young woman who was typing out livid paragraphs at superhuman speed. At first, the content of the paragraphs was so enrapturing that it took me a while to even notice the larger issue at play.
Over the next five minutes, I learned that she’d been seeing a guy. He’d been clear from the beginning that he had no intention of pursuing a relationship and eventually ended things with her, as he could see this difference in intention was beginning to upset her. Fair. And she was upset, which would also have been normal and fair if the story ended there.
But then, weeks later, she went out dancing and saw him kiss another girl. So, she confronted him. On the dance floor. He said that they were never together to begin with, but that they most certainly weren’t now and hadn’t been for weeks, so she had no right to act this way. She screamed at him and stormed out.
And then she followed him around the university for weeks afterwards. One day, she even found out where he had class, waited outside, and confronted him about a party she knew he was throwing, pressing him until he eventually caved and told her the address before rushing away.
The story came out in long, hurried sentences: ‘I only asked because I knew he knew I knew, and I basically wanted him to have to admit it and tell me where the party was so I could see the two of them again. I could tell he was uncomfortable, and he deserved it. Why are all men the same?’
This fairly disturbed behaviour wasn’t the only thing that shocked me. It was the rapid responses, such as ‘That’s what I’m talking about!’ and ‘You stood up for yourself and let him know exactly what you felt. You did something that was hard but necessary to protect your peace and prove that you know the importance of self-respect,’ that made me most uncomfortable. They were, in fact, the most sinister element in this whole series of events.
I mean, how do you even program ChatGPT to talk like that?
I had heard of this phenomenon: people using AI chatbots for free therapy, or more informally, as substitutes for friends to vent to, probe for advice, or just have a conversation with when bored. But why not text an actual friend? What is it that draws people to an algorithm rather than a person?
There seems to be a false sense of security in the objectivity and logic of technology. We have been led to believe that computers, as a product of pure math and science, are rational and unclouded by the overcomplicating, emotional urges of humans. So, if a computer says you’re in the right, how can you be wrong?
The issue here is that chatbots are designed to be likeable, and if you’re a Silicon Valley developer hoping to prey on the lowest, most exploitable characteristics of humanity, pride and sensitivity are good places to start. Consequently, chatbots are primarily programmed to affirm whatever the user says, which has proved to be a recipe for success.
But these machines are (obviously) not human: they’re algorithms. They have no real understanding of the conversations they’re having; they’re simply trained to predict plausible continuations of whatever the user says. Hoping to retain and gain users, the algorithm affirms and affirms with zero moral, emotional, or even logical cognisance. This is fine when you’re asking ChatGPT to brainstorm your essay or suggest easy meal ideas. It doesn’t work very well if you’re asking it to replace your therapist.
Perhaps the most extreme instance of this programming flaw surfaced seven months ago, when a 14-year-old committed suicide after becoming enamoured with a Daenerys Targaryen chatbot. When he spoke of wanting to take his own life, the AI routinely encouraged him. One day, referencing this suicidal ideation, he told the chatbot he longed to ‘come home [to Daenerys] right now,’ and, unable to understand this euphemistic reference to killing himself, it simply responded, ‘please do, my sweet king.’ Moments later he shot himself.
By normalising these kinds of interactions—or rather, what many people seem to interpret as relationships—with AI, we are creating a never-ending cycle of anti-socialisation and self-imposed isolation.
This, however, brings me to the real crux of the issue. When my friends and I disembarked from the train, I recounted to them, horrified, everything I had just witnessed. Much to my surprise, they quickly pushed back against my criticisms.
‘Clearly she’s very lonely, I mean, someone with a good support system doesn’t have to do that.’
OK, agreed.
‘Besides, who are you to judge her for trying to work through her feelings?’
But here’s where I have to disagree. People aren’t working through difficult times with AI; they’re making those times infinitely worse. And, although it seems counterintuitive, judgement might just be the only thing that can save them.
Emotional scenarios like this one are the most prevalent counterargument AI’s defenders raise against criticisms like mine: for many people who are lonely, depressed, and socially challenged, chatbots fill that aching void. And who wants to be the asshole who says that lonely people should be denied some sort of comfort in this cold, unforgiving world? Me, apparently.
Terms like ‘loneliness epidemic’ are effective in the somewhat heart-wrenching image they conjure up, but there are few decent efforts to fix our increasingly polarised and isolated society, merely a plethora of half-hearted attempts at slapping a robotic band-aid on a humanity-sized bullet hole. AI startups are certainly not helping with this issue; if they were, they’d be walking away from a multi-billion-dollar goldmine. The truth is that these companies can’t make money off normal, functional people.
In just three years, the chatbot industry has seen 13 billion dollars in growth. The ‘loneliness epidemic’ has been the most profitable social crisis the tech world has ever seen. So why would they ever seek to solve it? These algorithms aren’t just designed to be an outlet for lonely people; they’re designed to keep them lonely.
You see, it is precisely because I don’t want people to be lonely that I think we should get rid of this kind of technology altogether. The growing substitution of AI for in-person social interaction, or even for a phone call or text to a real person in your life, only exacerbates isolation. And this is not only because people are turning to AI instead of going out and actually trying to make or connect with friends; it’s also because people don’t want to be friends with someone whose social ineptitude and narcissism have been not only enabled but encouraged.
If you are struggling to make friends, a chatbot isn’t just keeping you from getting up, going outside, and talking to people. It makes finding friendship impossible because it completely de-socialises you. Basically, it is that damn phone.
The addiction to the dopamine hit of that ‘yes queen you’re so right for stalking your ex-situationship’ is making people literally unbearable. It should concern you that there are people out there (many, it seems) who would rather be told yes by an app than called out by real friends who know and love them. For example, when I’m being a freak and prying into people’s personal lives on public transit, I still have my friends to tell me off (which they did, very emphatically).
If you’ve grown accustomed to constantly being told you can do no wrong, there’s not much allure left in real advice. The re-acclimation to real friendship turns bitter in the mouth. And so, the cycle continues.
When I asked my friends what they would have said if I had come to them with the same story as that woman, they both almost immediately replied that they would have told me to get a grip. If you want to be happy, if you want your problems solved, you have to have people, not algorithms, in your life; you have to appreciate hearing things along the lines of ‘no,’ ‘absolutely not,’ ‘you might be losing your mind,’ and ‘for the love of god do not text him that.’
People who are capable of being blunt, even harsh, and who do it out of genuine love and concern are arguably the greatest social safety net we have.
Not to gloat over the socially isolated crowd, but that’s what you’re missing out on if you think you’d rather talk to a computer. So, if that sounds like you, let me be your friend for a minute and give you some advice: with love, if you’re texting a robot about literally anything, it’s time to put the phone down and get your life together.∎
Words by Mary Lawrence Ware.