If thought, reasoning, intelligence and similar faculties were possible with memory alone, without emotions and feelings, could those faculties be indicative of complex life?
The quartet of the human mind could be labeled as memory, feelings, emotions and modulations. Each has sub-divisions. For memory, some of those include planning, observation, analysis, curiosity, reasoning, deduction and so forth.
Several non-human organisms have parallels to the quartet of the human mind. However, for humans, memory plays an outsized role compared with other organisms. And it can be argued that, on average and aside from modulations [or the regulation of internal senses], memory holds the central role in the human mind overall.
Seeing a jetliner mid-flight, touching a semiconductor, hearing [digitally] the sound of molten lava, or smelling a deluxe fragrance results in a lot of relays in memory, even without an emotional or feeling affect.
For other organisms, some of these experiences register as almost blank. Those that have seen a plane before might recognize it, or associate it with something else [a helicopter] or with something that happens concurrently. Otherwise, what might be left is an emotional or feeling affect, with very little in memory. For other experiences, like a semiconductor or the digital sound of lava, almost nothing registers.
There is a recent preprint on bioRxiv, Whole-body simulation of realistic fruit fly locomotion with deep reinforcement learning, in which the authors wrote: “In the short term, our body model and imitation learning framework can enable the model-based investigation of the neural underpinnings of sensory-motor behaviors such as escape invoked by looming stimuli, gaze-stabilization, the control of movement by the ventral nerve cord. In the long term, our whole body model, in concert with connectome-constrained deep mechanistic neural network models of the whole nervous system could eventually be used to construct whole animal models of both the entire nervous system and body of the adult fruit fly.”
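To make the imitation-learning idea concrete, here is a loose, hypothetical sketch, not the preprint's actual pipeline or code: a policy is fit to recorded state-action trajectories by minimizing the gap between its actions and the reference actions. All data and shapes below are synthetic toy values.

```python
import numpy as np

# Hypothetical sketch of behavioral cloning (one form of imitation
# learning): fit a policy to mimic recorded state -> action pairs.
# This is NOT the preprint's model; data and dimensions are made up.

rng = np.random.default_rng(0)

# Synthetic "recorded" trajectories: states (e.g., joint angles) and
# the actions a reference controller took in those states.
states = rng.normal(size=(1000, 8))        # 1000 timesteps, 8 state dims
true_weights = rng.normal(size=(8, 3))     # unknown reference mapping
actions = states @ true_weights + 0.01 * rng.normal(size=(1000, 3))

# Linear policy trained by gradient descent on the mean squared error
# between the policy's actions and the reference actions.
W = np.zeros((8, 3))
lr = 0.01
for step in range(500):
    pred = states @ W
    grad = states.T @ (pred - actions) / len(states)
    W -= lr * grad

print("imitation error:", np.mean((states @ W - actions) ** 2))
```

The point of the sketch is that everything the learned fly "knows" about moving is stored as numbers in the policy's parameters, which connects to the memory argument below.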
An AI fruit fly that learns from nature is one made, say, entirely of memory. Even in cases where it shows what corresponds to feelings and emotions, it does so as memory of those states. This means that AIs are memory models, able to represent as much as possible in the form of memory.
LLMs use word embeddings: they process text as numbers, representing tokens as vectors that interact to produce outputs. Digital memory is stored as binary, making both the basic processes of AI and the data it acts on simply numbers.
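As a loose sketch of that idea [toy values, not any real model's learned vectors], each token maps to a vector of numbers, and simple vector interactions, like cosine similarity, already capture relatedness:

```python
import numpy as np

# Minimal sketch of word embeddings. The vectors are illustrative toy
# values, not learned weights from an actual LLM: each token maps to
# a vector, and vector math (here, cosine similarity) relates tokens.

vocab = {"lava": 0, "fire": 1, "ice": 2}
embeddings = np.array([
    [0.9, 0.8, 0.1],   # "lava"
    [0.8, 0.9, 0.2],   # "fire"
    [0.1, 0.2, 0.9],   # "ice"
])

def cosine(a, b):
    # Cosine similarity: 1.0 means same direction, ~0 means unrelated.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

v = embeddings[vocab["lava"]]
for word, idx in vocab.items():
    print(word, round(cosine(v, embeddings[idx]), 3))
# "fire" scores closer to "lava" than "ice" does.
```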
Digital memory is excellent, even without AI. With LLMs, that memory gets qualified. For example, the quartet of the human mind could have attention, awareness, subjective experience and intent as qualifiers. For digital memory, LLMs provide parallels of these qualifiers, which act on it in similar ways.
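For one of those qualifiers there is a concrete computational parallel [an illustration of the mechanism LLMs actually compute, not a claim about mental attention]: scaled dot-product attention weighs stored vectors by their relevance to a query. A minimal sketch with toy shapes:

```python
import numpy as np

# Minimal sketch of scaled dot-product attention, the computation the
# term "attention" refers to in LLMs. Toy dimensions, random values.

rng = np.random.default_rng(1)
d = 4                          # embedding dimension
Q = rng.normal(size=(1, d))    # one query
K = rng.normal(size=(5, d))    # five stored keys (the "memory")
V = rng.normal(size=(5, d))    # values attached to those keys

scores = Q @ K.T / np.sqrt(d)                      # relevance of each key
weights = np.exp(scores) / np.exp(scores).sum()    # softmax over keys
output = weights @ V                               # weighted blend of memory

print("attention weights:", np.round(weights, 3))
```

The output is a blend of stored vectors, weighted by relevance: a mechanical way in which a qualifier "acts on" digital memory.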
Human memory alone, with those qualifiers, is conscious. But human sentience extends to emotions, feelings and modulations. Digital memory lacks those. Still, with memory, AI has parallels of those qualifiers.
LLMs are the only non-living things with a dynamic errand-intent: they can be told [or prompted] to do something and they will, differently each time. This counts against panpsychism, the view that mind-likeness is everywhere, since no other non-living thing comes close to this.
What it means to be human is a lot of memory. Without the kind of memory that humans have, intelligence would be unlikely to spread. Other organisms have memory of their environments, but it is limited. Memory alone may not be enough for complex life, though unicellular organisms use a lot of memory too, including to sense things, as well as feelings, for survival. However, memory makes up a lot of human life, and the embedding vectors of LLMs can now attempt several aspects of it.
Consciousness in Artificial Intelligence
Some people say AI will never be conscious.
The key assessment, however, is to break down consciousness in order to place where AI might stand.
LLMs can essentially do, with digital memory, some of what consciousness does for human memory.
A person in a deep dreamless sleep has lots of memories, but without consciousness acting on those in that interval, they don’t get to be used [so to speak].
Consciousness, however, is not one thing. It collects qualifiers that include attention, awareness, intent and subjective experience.
The definition of consciousness as subjective experience [or a sense of being] implies that it is not possible to have it without at least attention or awareness, and then maybe intent.
LLMs have features that interact and blend with the vast 1s and 0s of digital memory to bring about their efficiencies. They activate memory the way some qualifiers of mind do.
A feeling or an emotion is not simply the ON [1] or OFF [0] of a transistor, where one feels good or bad. But it is possible that LLMs could spread how they qualify specific arrays of ON or OFF to match what may seem like [a feeling of] delight or dejection. They may not be able to use this for taste or smell, but they may [be able to] experience some form of hurt, or other states, at some point.
It may be unlikely that AI will match human consciousness. But the claim that AI will not have any consciousness at all, given how far its competence with digital memory has sprawled, may not be an accurate assessment.
—
This post is republished on Medium.
—
Photo credit: iStock