Boston Dynamics is adding ChatGPT’s new voice generation functionality to its Spot robot. The result is an automaton that can greet people on a company visit and speak in a range of accents, creating a hitherto unknown empathy in the world of robotics, with the possible exception of those “abused robot” videos that made us all take the robot’s side against the evil human who pushed it, hit it with a stick, or prevented it from doing its job.
Watching Spot speak with an impeccable British accent while sporting a bowler hat and mustache may be amusing, but to be honest, I am unsettled by the sight of people using ChatGPT’s voice synthesis feature to hold long conversations with the generative assistant on their smartphones, whether to pass the time in a traffic jam or while walking down the street with AirPods in, à la the movie “Her”. It’s dystopian, and it raises any number of ethical questions.
Of course, we humans have always anthropomorphized technology. But when this technology advances to the point where it can generate voices and carry on conversations along the lines of a personal relationship, and when this is marketed and normalized by unethical companies with a history of moving fast and breaking things, then we are flirting with disaster and potentially creating psychological problems for vulnerable people.
Anyone who knows the importance teenagers place on their idols, for example, or who considers what good it might do an adult with personal problems to use an algorithm as therapy, with few constraints and the possibility of occasional hallucinations, understands that we are on a slippery slope. In short, there is every likelihood that these virtual relationships will not just be prone to malfunctioning, but will be instrumentalized to influence the behavior of the people who maintain them.
Experiences with other people affect us in many ways, but at least they are experiences and conversations between people with some degree of rationality. To go from there to conditioning people through generative assistants, and to do so without any apparent precautions other than a few very general restrictions, is unacceptable. If we add to this a high level of accessibility and an element of normalization, we will soon see celebrity avatars holding daily conversations with millions of people: weaving in personal details mentioned in previous conversations, extracting personal data of all kinds and selling it to advertisers, or inducing moods in their human interlocutors to encourage them to buy things or to vote a certain way.
Our societies are not ready to assimilate a technology like generative AI in the context of personal relationships, simply because we have not gone through a stage of education that allows large numbers of people to really understand what, rather than who, they are talking to. Many people grant a certain authority to algorithms, attributing to them an unlimited capacity for consulting and synthesizing information, and outsourcing their critical thinking to the answers they access through technology. As Arthur C. Clarke rightly observed, any sufficiently advanced technology is indistinguishable from magic, and ignorance of how a particular technology works can have enormously harmful effects on human societies: from distorted perceptions of reality to alienation and psychological problems.
I have always found it strange to meet someone I often see in the media: a certain feeling of “I’ve had this person in my living room” that has often led me to assume a familiarity beyond what I should have with someone I am meeting for the first time. Assimilating completely asymmetrical relationships, like the one you (don’t) have with someone who, for example, reads you the news every day, is in itself a task that requires a certain maturity, education, and judgment. What will happen when people who have been talking every day to a generative algorithm that very convincingly pretends to be their idol finally have the opportunity to really meet that person, or internalize those conversations as real? What happens when someone attributes a personality to something that is nothing more than a generative algorithm recombining information from the network? And all this in an uncertain regulatory context, in near-total ignorance of how such tools work or how they should be managed, not to mention our lack of experience with the psychological disorders they may cause?
The problem here is not the pace at which technology is developing, but the products that some irresponsible people put on the market without taking adequate precautions. We need clear and precise regulation of this technology, rather than an ad hoc approach to limiting the activities of the idiots who seek to profit from products and services based on it. This is “move fast and break things”, which has already generated so many problems, taken to the next level.
I’m not frightened by technology or prone to blaming it for society’s ills. Regular readers will know that I’m generally a techno-optimist. And yet, this topic genuinely worries me, and I think it’s going to lead to a lot of regrets.
—
This post was previously published on MEDIUM.COM.