In a recently published transcript of a keynote delivered at a NeurIPS conference, “Could a Large Language Model Be Conscious?”, the speaker said, “The first objection, which I’ll mention very quickly, is the idea that consciousness requires carbon-based biology. Language models lack carbon-based biology, so they are not conscious. Views like these would rule out all silicon-based AI consciousness if correct. In earlier work, I’ve argued that these views involve a sort of biological chauvinism and should be rejected. In my view, silicon is just as apt as carbon as a substrate for consciousness. What matters is how neurons or silicon chips are hooked up to each other, not what they are made of.”
If a silicon-based substrate could have consciousness, what would that look like? Human consciousness is defined around being and experience. It is generally accepted that AI does not have experience. But do humans have experience, or do they know experience?
If the smell of something is perceived, or an individual tastes something, what is the difference between describing both as having experience and as knowing experience? Is it possible to have an experience without knowing it, or knowing of it? So, if knowing is a component of experience, doesn’t that suggest that what is described as experience is a knowing process?
Consciousness is also described with the awareness of being or the sense of self. If someone is aware of being or has the sense of self in a process, what is the difference between that awareness and knowing? Simply, what is the difference between knowing and awareness?
If knowing is associated with being and experience, then consciousness as a denominator has knowing as the numerator. It is scientifically established that consciousness arises from the brain. If specific areas of the brain are associated with consciousness, what is the difference between what happens in those areas, to know of being and experience, and what happens in others?
The speaker’s view is that how neurons are connected, something like a neural correlate, explains consciousness. However, given the distribution of neurons in the brain, it is unlikely that the mechanisms for consciousness in the thalamus and the cerebral cortex are largely different from mechanisms elsewhere.
How might consciousness be approached? The vital aspect is impulses. It is theorized that the collection of the electrical and chemical impulses of nerve cells, with their features and interactions, constitutes the human mind. It is the features and interactions in sets, at different circuits, that define and differentiate functions.
Different types of neurons enable some of the features and interactions, but the basic processes to know are similar across the brain. Why, then, are some functions more specialized in certain parts of the brain than in others?
It is postulated that a key feature of a set of chemical impulses is drifts or stairs, where fills or rationing occur. This means that when a set of electrical impulses strikes a set of chemical impulses, for those in that set, there are stairs where chemical impulses are rationed in ways appropriate enough to define what a memory is, as well as to differentiate it from an emotion or a feeling, and degrees of these, or a taste from a smell, or a motor function from a modulatory function. In all drifts, there are points for the sense of self. It is these points that make experiences subjective, enabling attachment, such that a cold is not a detached experience and care is sought for the self. There is access to some of these points, providing a source for intentionality or free will. The stairs or drifts are available, as a structure, in groups of synaptic clefts, exceeding individual vesicle and receptor processes.
For memory encoding, there are stairs that are filled or expanded; for retrieval, there are some that are struck and obtained, after which electrical impulses relay elsewhere. For an emotion like hurt or a feeling like pain, there could be a depletion of a particular ration, a fill of one, or a drift in another direction that results in the experienced sharpness.
It is hypothesized that sets of electrical impulses also have their drifts or stairs, as myelin sheaths provide insulation, allowing jumps from node to node for faster transport in what is called saltatory conduction. For electrical impulses in sets, the jumps may not be concurrent but may come with slight differences that shape how they strike at sets of chemical impulses, influencing degrees of knowing or of what is known.
There are other features of sets of electrical and chemical impulses, including splits, sequences, principal spots, thin and thick shapes, and bounce points. Splits explain predictive coding, predictive processing, and prediction error.
Whenever a set of electrical impulses strikes a set of chemical impulses, it may expand, pick, stir (to keep active), rotate, or build those. All of these are knowing processes, with definitions across experiences.
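The five interactions above can be pictured with a toy sketch. This is not a neuroscience model: the function name, the list-of-rations representation, and every number are illustrative assumptions, used only to show the five named operations acting differently on the same set.

```python
# Toy sketch only: a "set of chemical impulses" is represented as a list
# of ration levels, and each named interaction is one operation on it.
# All names and values here are illustrative assumptions, not the text's claims.

def strike(chemical_set, operation):
    """Apply one hypothetical knowing process to a set of rations."""
    if operation == "expand":   # add capacity, as in memory encoding
        return chemical_set + [0.0]
    if operation == "pick":     # read without altering, as in retrieval
        return list(chemical_set)
    if operation == "stir":     # keep active; levels unchanged
        return list(chemical_set)
    if operation == "rotate":   # shift which ration leads (a drift)
        return chemical_set[1:] + chemical_set[:1]
    if operation == "build":    # raise a ration (a fill)
        return [chemical_set[0] + 0.1] + chemical_set[1:]
    raise ValueError(f"unknown operation: {operation}")

rations = [0.5, 0.3, 0.2]
print(strike(rations, "rotate"))  # [0.3, 0.2, 0.5]
print(strike(rations, "expand"))  # [0.5, 0.3, 0.2, 0.0]
```

The point of the sketch is only that one striking set can produce different outcomes, defined by the operation, on an otherwise identical chemical set.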
Since humans know experience, can silicon-based devices know experience? All experiences are mechanized by impulses, sometimes without a difference between when an individual is in a situation and the mental play of it. The mental play can know what a cold is, complete with the emotions around it, without an external situation. Mental plays are mostly thought from memory. Since everything on the mind is impulses, situational experiences and mental plays are of similar forms. They differ only slightly in the degree of the sense of self and in the extent of acquisitions of sets of chemical impulses, which may be higher depending on the situation, as well as in bounce or sequence, since in a situation the electrical impulses may proceed in a different direction than they would in a mental play.
Experience for a semiconductor may not be a result of usage but of the extensive dynamism of its memory. There are heavy machineries with parts that can be said to have restricted dynamism: they do as expected. There are some, like gaming machines, that may seem unpredictable, but their unpredictability is expected to result in a loss or a win.
LLMs, with their ability to take on questions and answer them, sometimes accurately, with details that are quite different each time and across platforms, may be said to have a form of memory play, handing them a lower variable against the vastness of human experiences.
Divisions and subdivisions of the human mind to know, or for consciousness, include memory, emotions, feelings, sensations, perceptions, thoughts, intelligence, creativity, reasoning, regulations, and so forth, with rates totaling 1.
Silicon wafers don’t have most of these, but they have a form of memory, copied from human intelligence. This memory, or their use of what they know for their dynamic outputs, can be estimated in comparison with the human minimum in that division. Assuming the lowest possible human minimum in the memory division, at any point, is 0.10, it is possible that AI may measure around 0.15 in its current best case.
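A minimal numerical sketch of this comparison, taking the two figures directly from the text as assumptions (a human memory-division minimum of 0.10, an AI best case of 0.15), with the other division names listed only as placeholders that would share the remainder of the total rate of 1:

```python
# Assumed values from the text; the division names beyond memory are
# placeholders, since silicon wafers are said to lack most of them.
HUMAN_TOTAL = 1.0

human = {"memory": 0.10}   # assumed lowest possible human minimum
ai = {"memory": 0.15}      # assumed current AI best case

# Everything outside the memory division, for a human at this minimum:
other_divisions = HUMAN_TOTAL - human["memory"]

print(ai["memory"] > human["memory"])  # True: AI may exceed the human floor
print(round(other_divisions, 2))       # 0.9: the rate AI mostly lacks
```

On this framing, AI could surpass the human minimum in one division while the remaining 0.9 of the total rate stays largely out of its reach.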
—
This post is republished on Medium.
—
Photo credit: iStock