The news of the day is that Joe Biden has finally decided to throw his hat into the regulatory ring, issuing the first executive order to set standards for the safety and security of artificial intelligence, and obliging companies that develop products and services based on this technology to establish standards and share the results of their security tests with the US government.
For someone who has spent his entire professional life defining his research interest as “the effects of technology on people, companies and society”, it’s both unfortunate and dangerous that we seem to have learned nothing from previous regulatory efforts, notably the failure to impose any order on social media, and that we are apparently determined to repeat the same mistakes.
Let’s be clear: the standards and security testing of a technology can never, ever be left in the hands of those who stand to make money from it. Never. Otherwise, be prepared to pay the consequences. The potential effects of a technology such as artificial intelligence cover absolutely all areas of life in society, not least privacy, discrimination, civil rights and labor markets. That does not mean we should stop researching or developing this technology (impossible anyway), but its incorporation into all aspects of society requires a very high level of supervision, which by its very definition must always be external, particularly in the case of companies that put profit above any other consideration.
In short, we’ve been here before: social networks have always been about making money from the people that use them. Why? Because the service was “free”, but paid for through targeted advertising.
This seems very natural to us today, but at the time it should have raised all kinds of concerns, because the difference between that technology and the previous one was that it was based on the capture of user data, as opposed to traditional advertising on television, in newspapers or on billboards. Regulation should therefore have been based on what could or could not be done with that information: what was permissible to capture and store and what was not, how it could be used, to whom it could be resold, and what effects it could have on the fundamental rights of the individual.
As this sandboxing was never done, we have seen electoral manipulation, the sale of personal data of all kinds, including health, political affinities and religious beliefs, and users bombarded with advertising. It’s clear that regulation has failed when a company like Meta, with a past that exposes it as having no ethics, dares to define its vision as “we believe in an Internet based on advertising, which gives people access to personalized products and services, regardless of their economic situation”, and this goes unchallenged by the authorities (the people it cites are not users, but raw material sold to the highest bidder), while it plans to offer a paid, advertising-free service so it can claim user consent and carry on as before.
Regulation is not about forcing Meta to offer a paid, advertising-free service, but about explaining to it that its product, as currently conceived, infringes the most basic human rights and is therefore unacceptable. If it violates fundamental rights, it doesn’t matter whether people consent: I cannot consent to be killed, or harassed, or to anything else that infringes my fundamental rights. If Meta wants to target advertising only by time, content, geography and a few other things that don’t violate user privacy, fine. If not, if it insists on providing its real customers, the advertisers, with complete and exhaustive information about its users, it should be declared illegal and excluded from the market.
And now it looks like we’re going to repeat the same mistakes with artificial intelligence. For a regulator to invite the market to regulate itself is unacceptable, especially when there are precedents with previous technologies. To assume any kind of “good faith” from those who have never shown it is crazy. Instead, what needs to be done is to set out what we already know about AI and the risks involved in its mass use, and to tell any company that intends to provide a product based on a generative algorithm that if it repeats those mistakes, the wrath of hell will be unleashed on it, rather than some trivial fine.
We are allowing ourselves to be blinded by arguments that we still don’t know whether AI will prove a danger, rather than talking about the negative effects that already exist, that are already here and that we already know about. That’s irresponsible and can’t bring anything good. In the end, it’s about repeating Uncle Ben’s wise words in Spider-Man: with great power comes great responsibility. If companies wielding this technology’s awesome power do not use it responsibly, they need to know they will be severely and exemplarily punished. Companies run by greedy, out-of-control individuals who have caused untold harm have no place providing AI services, and those individuals need to know they will go to jail if they repeat the crimes they committed when running social networks. Under no circumstances can the market be left to regulate itself.
We’ll see what comes out of Biden’s executive order. But so far, it doesn’t look like it’s going to be anything good.
(In Spanish, here)
—
This post was previously published on MEDIUM.COM.
—
Photo credit: iStock.com