I enjoyed reading this research by a Polish company called Tidio about how algorithms hallucinate, entitled “When machines dream: a dive in AI hallucinations” (pdf), which both defines and categorizes the phenomenon, and adds some survey findings from users.
The problem of hallucinating algorithms has been well known for a long time now, and stems from purely statistical issues, analogous, on a completely different scale, to the multicollinearity that arises when working with variables with high levels of overlap. Scale the number of variables up to billions and the possibility of correlations of all kinds grows with it, which means that algorithms will return answers ranging from the completely false, or even defamatory, to things that seem straight out of a bad LSD trip.
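The multicollinearity analogy is easy to see numerically. The minimal sketch below (my own illustration, using only NumPy; the variable names are invented for the example) fits a least-squares regression on two predictors that are almost exact copies of each other: the model still fits the data well, but the individual coefficients become wildly unstable, because the data cannot tell the two overlapping variables apart.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# One real signal, plus a second predictor that is almost a copy of it.
x1 = rng.normal(size=n)
x2 = x1 + 1e-6 * rng.normal(size=n)   # near-total overlap with x1
y = x1 + 0.1 * rng.normal(size=n)     # the true relationship uses x1 alone

X = np.column_stack([x1, x2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# The design matrix is severely ill-conditioned...
print("condition number:", np.linalg.cond(X))
# ...so the individual coefficients are essentially arbitrary,
# even though their sum recovers the true combined effect (about 1).
print("fitted coefficients:", coef)
print("sum of coefficients:", coef.sum())
```

Typically the two coefficients explode to huge values of opposite sign that cancel each other out: the fit looks fine, but the individual numbers are meaningless. Multiply the variables to billions, as a language model does, and spurious relationships of this kind are everywhere.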
I find it disturbing that 22% of people believe that these issues are caused by governments or other entities with an agenda. Apart from an inclination toward conspiracy theories, this shows that nearly a quarter of us have no idea about basic statistics.
The only way to deal with hallucinations in generative algorithms, apart from working on improving their performance, is a better understanding of how these algorithms work. A hallucination ceases to be a problem when we stop interpreting the algorithm as some kind of “higher intelligence” or “benchmark” and understand it for what it really is: a statistical machine that searches and examines correlations, and when it can’t find one, settles for something that merely looks like one. Checking answers, questioning the results and taking nothing as certain until we have been able to verify it properly is therefore fundamental if we are to use these tools well.
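The “statistical machine” idea can be made concrete with a toy example. This sketch (plain Python, my own illustration, not a real language model) trains a simple bigram chain on two true sentences and then generates text from it; because the model only knows which words tend to follow which, it can fluently assemble statements that its training data never contained.

```python
import random
from collections import defaultdict

# A toy "training corpus" of two true statements.
corpus = [
    "the capital of france is paris",
    "the capital of italy is rome",
]

# Count bigram transitions: each word maps to the words seen after it.
follows = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        follows[a].append(b)

def generate(start, rng):
    """Walk the bigram chain, picking each next word at random
    among the continuations observed in the corpus."""
    out = [start]
    while out[-1] in follows:
        out.append(rng.choice(follows[out[-1]]))
    return " ".join(out)

rng = random.Random(42)
for _ in range(5):
    print(generate("the", rng))
```

Run it a few times and sooner or later you will see output like “the capital of france is rome”: perfectly fluent, built entirely from observed correlations, and false. Real language models are incomparably more sophisticated, but the failure mode is the same in kind, which is why verification has to stay with the human.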
In other words, when we start using generative algorithms, we have to expect hallucinations: it’s worth remembering Arthur C. Clarke’s dictum that any sufficiently advanced technology is indistinguishable from magic. If this continues to happen after a process of education in the proper use of generative algorithms, that is a different problem, one that has more to do with the laziness of those who have not paid attention.
This is one of the reasons why it is so important to introduce the use of generative algorithms in education. We must avoid what happened with the internet, which young people ended up teaching themselves about because schools either ignored or banned it. As a result we have a generation we like to call digital natives who are actually digital orphans that no one bothered to educate in the use of a truly powerful technology. If the same thing happens with generative algorithms, we will be repeating the same mistake.
If you are a teacher and you are not already considering how to use generative algorithms, you’re doing something wrong. But of course, years of ignorance, conservatism and continued smartphone bans in schools have not made it easy for us.
(In Spanish, here)
—
This post was previously published on MEDIUM.COM.