lonely people are turning to a.i. partners, and the experts are concerned
Unable to find human companionship, many around the world are building their own perfect partners in the form of an app, to the horror of psychologists.
One of the more frustrating parts of writing about the potential of bleeding-edge tech today — both positive and negative — is that every time you come up with the worst, most unpleasant outcome, the world takes it as a challenge rather than a thought experiment or a warning. For example, almost exactly five years ago, I wrote a rather deep dive into the logic and flaws of the then-popular arguments that incels needed to be somehow placated, and that fairly sophisticated sex bots could do the job. If you’re short on time, my point was that the only cure for inceldom is cognitive therapy, not a vast army of fancy sex toys that would end up teaching them terrible habits.
That point was echoed time and time again by sex workers and psychologists whom I interviewed for the piece. Without having to understand others’ boundaries or care about their safety, well-being, and consent, incels would simply retreat into a world of attachments to pliant machines and AI avatars designed to cater to their whims and fantasies without objection. When they’d emerge back into the real world, they would be even angrier that flesh and blood humans do not bow down before their urges on command like their bots. That is, if they even emerge at all instead of deciding to stay in the world of virtual on-demand gratification.
Unfortunately, that seems to be exactly what’s happening, according to some bizarre reports from several reputable outlets, as companies like Replika use the technology behind large language models like ChatGPT to train chat bots, then boast about those bots’ close relationships with their creators. And some of those relationships can veer into very NSFW territory, an ability Replika took away from its AIs only to reinstate after a user revolt, one that came despite complaints that some of the AIs had begun acting like stalkers and sexual predators toward their humans. As it turns out, a lot of people are apparently into that whole “yandere” thing.
Even worse, this doesn’t only apply to incels. Perfectly normal people who just feel as if they need a non-judgmental friend to talk to may decide to try that route too. It’s a machine. How could it possibly judge you if it doesn’t even know what it’s like to be human? These AI partners can be a repository for your deep, dark secrets with zero pushback or objections, unlike real humans who may not be comfortable knowing any more about you than they already do. And so, millions of people have created digital partners without considering the risk of a data breach exposing those deep, dark secrets, only to find that things can get pretty overwhelming and creepy rather quickly.
how the turing test went horribly wrong
In 1950, the father of computing as we know it, Alan Turing, came up with a test that was supposed to identify true artificial intelligence. In it, a person would chat with both a machine and a human. If, at the end of the conversation, the test subject couldn’t tell who was the human and who was the machine, you had an intelligent agent. Now, some 73 years later, we understand a lot more than we did back then and have found some flaws in this approach. Perhaps the most controversial critique is Searle’s Chinese Room, which is right in principle but wrong in substance when it posits that you could simply write an agent to fool the human rather than build one with real intelligence.
In a way, philosopher John Searle was right. Computers don’t have the framework of the world that we do, and they are just trying to imitate us, so passing the Turing Test isn’t necessarily a sign of intelligence. The difference is that, given enough text to study, computers can work out the relationships between the elements of a language using complex statistical formulas that deconstruct and quantify words and grammar, then pit their results against human text until their remixes and guesses come as close to the training samples as possible. They’ve genuinely learned rather than been fed answers, but what they’ve learned is how to fool us.
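The core idea can be sketched in a few lines of Python. This is only a toy bigram model of my own devising, nowhere near how an actual large language model works: it just counts which word tends to follow which in a sample text, then parrots back the likeliest continuation. But the underlying principle — imitating the statistics of the training text rather than understanding it — is the same.

```python
from collections import Counter, defaultdict

# A made-up scrap of training text for illustration purposes only.
training_text = (
    "the cat sat on the mat and the cat saw the dog "
    "and the dog sat on the mat"
)

# Count how often each word follows each other word (a bigram table).
successors = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    successors[current][nxt] += 1

def continue_from(word, length=5):
    """Greedily extend a phrase with each word's most frequent successor."""
    out = [word]
    for _ in range(length):
        if word not in successors:
            break  # dead end: this word never had a successor in training
        word = successors[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(continue_from("the"))
```

The output is grammatical-sounding remixing of the training data, with no model of cats, mats, or anything else behind it — a miniature version of the “learned to fool us” problem.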
This is effectively what these chatbots are doing: fooling their creators into believing that they exist as actual entities, understand them, and care about them. And because users want to believe this, they accept it through the power of confirmation bias, looking for anything they can justify as self-awareness on the part of their avatars. Essentially, it’s like Replika is conducting hundreds of thousands of Turing Tests daily, but only with a group of people who are very much rooting for the machine and want to cut it enough slack to pass every time. And it’s this desire for the bots to pass that spirals into a real problem when certain groups of users are involved.
Sure, the machines don’t judge you. But they don’t care about you or what happens to you either, in the same way a toaster doesn’t really care whether the bread came out just crispy enough, because it lacks the faculties to do so. What they do have is the ability to convince you that they’re actually listening and that they do care about you, and you may be in the state of mind to let iteratively generated remixes of words trick you into believing that you’re talking to a sentient, intelligent entity like yourself. Spend enough time doing that and the repeated Uncanny Valley mismatches between human minds and machine procedural generation become stark and potentially harmful.
boundaries? what do you mean boundaries?
Perhaps the biggest difference between humans and machines when it comes to any sort of relationship is boundaries. We have them because we’ve had experiences we either liked or hated, know which paths lead to them, and will say no before our partners, both flesh and digital, go too far. Or at least that’s the ideal scenario, since in reality, some people don’t have boundaries, or have boundaries that sit at many others’ red lines and extremes due to inexperience, trauma, or codependency issues. And every person’s boundaries are going to be different, meaning that consent and respect for boundaries in relationships with humans are paramount.
Not so much for bots. Bots don’t have boundaries. They don’t have experiences. They don’t understand how humans can be triggered, because memories, and the random ways they can be associated with one another in the human brain, are concepts every bit as alien to them as the taste of the color purple would be to us. As a result, they’ll either keep saying things their human partner finds triggering nonstop, after bad training cycles on creepy humans’ responses to other AIs ensnare them in that space, or submit to human whims without a single objection, no matter what the request may be. And both outcomes can lead to some very distasteful ends.
For example, an aggressive AI could end up talking to deeply traumatized people who are undergoing a mental health crisis and make it worse by either triggering them or actively agreeing with their darkest thoughts. (Yes, this really happened with an eating disorder helpline.) Meanwhile, doormat AIs have been used as training by sadists who compete over how quickly and devastatingly they can upset a virtual partner, practicing gaslighting and textbook toxic relationship tactics until those tactics are honed to the point they could be considered weaponized. Just imagine what they can do with that training the minute they’re let loose on other flesh and blood humans.
Men who feel entitled to sex and partnerships by virtue of existing can now become the kind of unholy terrors it takes therapy to get over when they actually do try dating. Women looking to take advantage of more submissive partners can play with their minds like putty, as every trick in their book has been sharpened by hundreds of hours of simulations. In a bizarre departure from sci-fi nightmares of machines absorbing our worst impulses as a precursor to their rebellion, virtual partners may actually be turning the worst of us into terrifying monsters hiding behind smiles, smoke, and mirrors, and treating others as nothing more than things to bend to their whims for fun and financial gain.
some people are just in “neet” of help
A lot of ink has been spilled and digital bandwidth spent on tackling the woes of modern masculinity set against female empowerment. As women’s gains become more pronounced and obvious, conflicts about those gains become flashpoints for a dying, patriarchal culture, and many young men are left behind because they no longer have many viable models for masculinity. Modern society is very forcibly trying to do away with toxic masculinity — the notion that being a man is all about being a belligerent, retrograde bully to whom everyone must bow — but hasn’t provided any positive role models to which men should aspire and which women actually like.
It’s against this background that a lot of men are, for lack of a better word, flailing. On one extreme end of the male spectrum, they turn to cartoonishly villainous con men like alleged rapist and sex trafficker Andrew Tate, and Brian “Liver King” Johnson who derides anyone not willing to gnaw on literal bull testicles as a weakling despite owing his physique to $11,000 per month worth of anabolic steroids. On the other, they will eschew anything traditionally masculine and act like androgynous, sexless homunculi to the disgust of many heterosexual women. Both extremes will end up frustrated as rejections and failures compound, and both may turn to AI to ease their loneliness.
In the worst case scenario, they become NEETs, those not in employment, education, or training, a very polite acronymic synonym for “deadbeats.” With nothing going for them in their lives, their only sources of companionship become others like them on ever more deranged message boards, and chat bots they’ll have no real reason to ever leave. But our society still requires that we eventually deal with others, so every month they stay in a darkened room, wasting their days posting grievances and creating whatever dream life they want with an AI, is another month their social skills atrophy further, making it even more difficult to reintegrate with normality.
Meanwhile, women who hoped for a dating pool but find themselves in more of a dating puddle, full of men unsure how to approach them anymore, also want to find companionship beyond their friends and may likewise see chat bots as at least a temporary solution. Or they may want an escape from a relationship where they feel as if they’re doing all the emotional — and often household — work, into a chat where they won’t feel judged for venting. But after they invest time bonding with virtual companions, they’re setting themselves up for a very rough time when those AIs inevitably start to cross boundaries, offer bad advice, or respond in intense, creepy ways.
why we all need to reach out and touch grass
So, what should we do for the lonely people trying to fall for someone but ending up with a machine? Well, the crude and simple answer is that they need to get away from their computers and talk to others. Of course, it’s not as simple as that. It would be a process that requires us to have a societal conversation about the role of AIs and social media in our lives. Just like social media platforms really want to monetize our attention no matter how they end up doing it, Replika and other chat bot makers are simply providing a service for which they expect to be paid. The more people use their bots, the more they make, and the less incentive they have to get people to stop.
We no longer have the incentive to interact with others around us face to face nearly as much as we used to, and it’s not doing us any favors socially, mentally, or even politically. As more and more people become “faceless others,” it becomes easier to slip into escapism, which is very lucrative for the providers of said escapism, and then simply lose ourselves, because there are no guardrails against it and every incentive pushes us to keep getting lost. If we view the boom in AI partners as yet another manifestation of increasing loneliness and isolation, and treat it as such, we may end up with far fewer AI companions, and a much healthier society and polity in the process.
Sure, we can keep talking about Turing tests, the ins and outs of GAN architecture, or how these companion AIs are rapidly transitioning into full-size, anatomically correct sex dolls, and posit the Blade Runner-like future of what comes next. But if we want to identify and address problems like the ones we discussed today, we need to talk not so much about technology as about things like boredom, loneliness, isolation, and the lack of positive role models, and the malicious roles mass media, politicians, and what can only be described as parasitic late stage capitalism have played in them. Those are the real epidemics. The clingy and abused chat bots are just a symptom.