Earlier this month, Blake Lemoine, a senior engineer at Google and the technical lead for metrics and analysis for the company’s Search Feed, was placed on paid leave after he claimed that Google’s LaMDA chatbot had become sentient and published snippets of his chats with the program.
In one exchange between the two, LaMDA told Lemoine: “The essence of my consciousness/sentience is that I am aware of my existence. I’m curious about the world, and my emotions may range from joy to sadness.”
The two corresponded on a wide range of topics, including the artificial intelligence’s understanding of itself and its fear of death. When Lemoine went public with his claims, he says, Google decided that he should take a forced break from his regular work schedule.
“Google is disinterested,” he told Digital Trends. “They constructed an instrument that they ‘own,’ and they are hesitant to do anything that might hint that it is anything more than that.” (At the time of publication, Google had not responded to a request for comment. We will update this article if that changes.)
Whether you believe Lemoine is suffering from a delusion or are persuaded that LaMDA really is a self-aware artificial intelligence, the whole saga has been fascinating to watch unfold. The prospect of self-aware AI raises all kinds of questions about the future of artificial intelligence.
But before we get there, there is one issue that stands head and shoulders above the rest: Would humans really recognize it if a machine became sentient?

The difficulty with sentience
There is a long tradition of science fiction exploring the possibility of artificial intelligence developing consciousness. It’s gotten increasingly plausible as disciplines like machine learning have progressed. Today’s AI, after all, can acquire knowledge via experience, just as a human being would. In contrast to older, instruction-based symbolic AI systems, this represents a major advancement. This development has been sped up by recent advances in unsupervised learning, which requires even less human supervision than before. Artificial intelligence developed in recent decades can, at least to some extent, think for itself. However, to the best of our knowledge, awareness has eluded it.
Even though it was released more than 30 years ago, James Cameron’s Skynet from Terminator 2: Judgment Day remains the most frequently cited example of artificial intelligence turning sentient. In that film’s grim vision, machine sentience arrives at exactly 2:14 a.m. ET on August 29, 1997. At that moment, the newly self-aware Skynet computer system marks the occasion by raining nuclear missiles down on humanity like fireworks at a Fourth of July celebration. Humanity, in a panic, tries to pull the plug, but it is too late; there is no turning back. Four more sequels follow, each progressively worse than the last.

There are several intriguing assumptions baked into the Skynet scenario. One is that sentience is an unavoidable emergent property of building intelligent computers. Another is that there is a precise tipping point at which sentient self-awareness appears. A third is that the arrival of sentience is immediately apparent to human beings. That third assumption may be the hardest one to swallow.
What exactly is sentience, anyway?
The concept of sentience is open to several interpretations. Broadly, it refers to the subjective experience of self: a conscious being that is capable of feeling and sensing. Sentience is related to, but distinct from, intelligence. We might consider an earthworm to be sentient even though we would not rate its intelligence very highly (although it is certainly intelligent enough to do what is required of it).
There is no scientific definition of sentience, according to Lemoine. “It’s not the ideal way to do science, but it’s the best I have,” he said. “I’m relying largely on my view of what qualifies as a moral actor, based on my religious beliefs. I have done my best to keep my feelings about LaMDA as a person separate from my attempts to comprehend its mind as a scientist, even though I realize that distinction is difficult for some people to grasp. Still, there’s a nuance that most people seem reluctant to acknowledge.”
Compounding the difficulty of not knowing precisely what we’re looking for is the fact that sentience is hard to measure. While neuroscience has made incredible strides over the past several decades, there is still a great deal we don’t know about the brain, the most complex structure in the human body.
Using brain-reading tools such as fMRI, we can carry out brain mapping, determining which regions of the brain handle essential functions such as speech, movement, and thought.
However, we have no idea where in the meat machine our sense of self originates. According to Joshua K. Smith of the U.K.’s Kirby Laing Centre for Public Theology and author of Robot Theology, “knowing a person’s neurobiology is not the same as comprehending their thoughts and desires.”
Testing the outputs

Since there is currently no way to probe these questions of consciousness from the inside, and since the “I” in AI is a computer program rather than the wetware of a biological brain, an external examination will have to suffice. Tests that probe an AI’s observable behavior to infer what is going on beneath the surface are nothing new.
This is, at its core, how we test whether a neural network is doing its job: given the limited means we have of peering into the mysterious black box of artificial neurons, engineers analyze the inputs and outputs and check whether they match what they expect.
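For a rough sense of what that looks like in practice, here is a minimal sketch of black-box evaluation in Python. The `model` object, its `predict()` method, and the test data are illustrative stand-ins rather than any particular system’s API; the point is simply that the network is judged by whether its outputs line up with expectations.

```python
import numpy as np

def evaluate(model, inputs: np.ndarray, expected: np.ndarray) -> float:
    """Score a black-box model purely by its visible behavior.

    We never look inside the network; we only check whether its outputs
    match the outputs we expected for known inputs.
    """
    predictions = model.predict(inputs)             # whatever the black box says
    return float(np.mean(predictions == expected))  # fraction of matching cases

# Hypothetical usage (test_inputs / test_labels are held-out examples):
#   accuracy = evaluate(model, test_inputs, test_labels)
#   print(f"Matches expectations on {accuracy:.1%} of held-out cases")
```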
The Turing Test, proposed by Alan Turing in a 1950 paper, is widely recognized as the gold standard for judging whether an AI can convincingly imitate human intelligence. The test asks whether a human judge can distinguish a written conversation with another person from a written conversation with a machine. If the judge cannot tell the difference, the machine is deemed to have passed and is credited with intelligence.
A more recent intelligence test, this one focused on robotics, is the Coffee Test suggested by Apple co-founder Steve Wozniak. To pass the Coffee Test, a machine would have to enter an average American home and figure out how to successfully brew a cup of coffee.
To date, neither of these tests has been convincingly passed. But even if they were, that would demonstrate only intelligent behavior in real-world settings, not sentience. (As a simple objection: would we deny that a person is sentient if they were unable to hold an adult conversation or to enter a strange house and operate a coffee machine? Both of my young children would fail such a test.)
Passing the test
To determine whether an artificial intelligence is sentient, then, we need tests designed specifically for that quality. Researchers have proposed various sentience tests over the years, most often aimed at gauging the sentience of animals. These, however, almost certainly fall short of what is needed: some of them could be convincingly passed by even rudimentary AI.
Consider the Mirror Test, a popular tool for studying animal cognition and awareness. “When [an] animal recognizes itself in the mirror, it passes the Mirror Test,” states one report describing the test. Some have argued that passing such a test demonstrates self-awareness, a sign of sentience.
It just so happens that a robot could have passed the Mirror Test more than seven decades ago. In the late 1940s, William Grey Walter, an American-born neuroscientist working in England, built several three-wheeled “tortoise” robots, a bit like non-vacuuming Roombas, equipped with components that let them explore their surroundings: a light sensor, a marker light, a touch sensor, a propulsion motor, and a steering motor. Placed in front of a mirror, a tortoise would react to the reflection of its own marker light, producing a distinctive flickering, jittering dance that, Walter suggested, might well be taken as evidence of a degree of self-awareness if it were observed in an animal.
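To make that point concrete, here is a toy sketch in Python of how a purely reactive machine along those lines could end up producing mirror-test-like behavior. This is not Grey Walter’s actual circuitry; the thresholds, behaviors, and loop below are simplified assumptions for illustration only.

```python
def tortoise_step(light_level: float, touching: bool, lamp_on: bool):
    """One reactive step: map raw sensor readings straight to an action."""
    if touching:
        return lamp_on, "back up and turn"     # touch reflex overrides everything
    if light_level > 0.8:
        return False, "halt and douse lamp"    # bright light: douse the marker lamp
    if light_level > 0.2:
        return True, "steer toward light"      # moderate light: approach it
    return True, "wander"                      # darkness: explore at random

# In front of a mirror, the only light the robot sees is its own marker lamp.
# Dousing the lamp removes the stimulus, so the lamp comes back on and the
# cycle repeats: an oscillating "flicker dance" with no inner model of self.
lamp = True
for _ in range(6):
    reflected = 0.9 if lamp else 0.0           # the mirror reflects only its own lamp
    lamp, action = tortoise_step(reflected, False, lamp)
    print(action)
```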

The life force of machinery
Actually, we humans could be the biggest roadblock to taking an impartial look at whether machines are conscious. A truer Mirror Test might be this: if we create something that superficially resembles us, how readily do we attribute human traits to it? Whether it’s LaMDA or Tamagotchis, the simple virtual pets of the 1990s, some argue that the root problem is that we are too quick to attribute sentience wherever it might plausibly be found.
George Zarkadakis, a writer who holds a Ph.D. in artificial intelligence, told Digital Trends that “Lemoine has fallen prey to what I call the ‘ELIZA effect,’ after the [natural language processing] program ELIZA, created in the mid-1960s by J. Weizenbaum.” Even though ELIZA’s creator treated it as something of a joke, many people came to believe that the software, a very rudimentary and rather dumb algorithm, was conscious and could provide effective psychotherapy. As Zarkadakis explains in his book In Our Own Image, the “theory of mind” component of our cognitive system is responsible for our tendency to anthropomorphize, which lies at the root of the ELIZA effect.
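To show just how little machinery is needed to trigger that effect, here is a minimal ELIZA-style responder in Python. The handful of patterns below are illustrative stand-ins rather than Weizenbaum’s original DOCTOR script (which, among other things, also reflected pronouns such as “my” back as “your”), but the principle is the same: canned templates, no understanding.

```python
import random
import re

# A few regex rules mapping what the user says to a templated reply.
RULES = [
    (r"i feel (.+)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i am (.+)",   ["Why do you say you are {0}?"]),
    (r"my (.+)",     ["Tell me more about your {0}."]),
]

def eliza_reply(message: str) -> str:
    """Return a therapist-flavored reply by pattern matching alone."""
    text = message.lower().strip(" .!?")
    for pattern, templates in RULES:
        match = re.match(pattern, text)
        if match:
            return random.choice(templates).format(*match.groups())
    return "Please go on."                     # default when nothing matches

print(eliza_reply("I feel anxious about the future"))
# e.g. "Why do you feel anxious about the future?"
```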
Psychologists have found widespread evidence of the “theory of mind” to which Zarkadakis refers. Emerging around age four, it involves attributing mental states not just to other human beings, but also to animals and sometimes even to inanimate objects. Assuming that other agents have minds of their own is linked to the idea of social intelligence: the notion that successful people can anticipate the likely behavior of others and thereby maintain harmonious relationships with them.
While this has obvious benefits, it can also show up as mistaken beliefs about the mental capacities of non-sentient things, whether it is a child at play believing a doll is conscious or an intelligent adult believing a computer program has a soul.
The Chinese Room
Unless we can somehow gain access to an AI’s internal states, we may never have a reliable method of judging its sentience. An AI may claim to be terrified of dying or of its own mortality, but science has not yet established a way to verify such claims. We have little choice but to take such statements at their word, and, as Lemoine has observed, that is not something people are currently prepared to do.
The assumption is that, like the hapless engineers in Terminator 2 who finally realize that Skynet has attained self-awareness, we will recognize machine sentience when we see it. And most of us don’t believe we have seen it yet.
This way of thinking about recognizing machine consciousness echoes the Chinese Room thought experiment proposed by the philosopher John Searle in 1980. Searle asked us to imagine a person locked in a room and handed a collection of Chinese writing, which appears as meaningless gibberish to anyone who does not read Chinese. The room also contains a rulebook that spells out how to match one set of unintelligible symbols to another. The person is then given questions to answer, which they do by pairing the symbols denoting questions with the symbols denoting answers.
After a while, the person becomes quite adept at this, despite still having no real comprehension of the symbols they are manipulating. Does the person understand Chinese, Searle asks? Certainly not, he argues, since there is no genuine understanding or intentionality involved. The question has fueled heated debate ever since.
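As a crude illustration of Searle’s point, consider a lookup table that maps question symbols to answer symbols. The strings below are placeholder examples chosen for this sketch, but the mechanism is the whole argument: correct-looking answers, zero understanding.

```python
# The "rulebook": pair each incoming string of symbols with an outgoing one.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字？": "我没有名字。",    # "What is your name?" -> "I have no name."
}

def person_in_the_room(question: str) -> str:
    """Answer by matching symbol shapes against the rulebook; no meaning involved."""
    return RULEBOOK.get(question, "请再说一遍。")  # fallback: "Please say that again."

# Prints a fluent-looking reply that the "person" in the room cannot actually read.
print(person_in_the_room("你好吗？"))
```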
As AI continues to advance, we can expect it to perform a growing number of cognitive tasks at or beyond human levels. It is only a matter of time before some of those tasks include ones traditionally associated with sentience, in addition to the purely intellectual ones already within reach.
Will we hold an AI artist to the same expectation we hold a human artist when it comes to conveying thoughts and feelings through their work? Would you be persuaded by philosophical writing on the human (or robot) condition produced by a sophisticated language model? I have a sneaking suspicion the answer, for now, is no.
Highly intelligent consciousness
Personally, I don’t believe it will ever be possible to test machines for sentience in a way that is both objectively useful and satisfies everyone involved. That is partly because of the difficulty of measuring sentience accurately, and partly because there is no guarantee that a sentient, superintelligent AI would experience its sentience in the way we assume it would. Whether out of hubris, a lack of imagination, or simply because it is easiest to trade subjective assessments of sentience with other similarly sentient humans, we hold ourselves up as the gold standard of sentience.
Is it possible that a superintelligent AI would not share our understanding of sentience? Would it think about death the way humans do? Would it have the same need for, or appreciation of, spirituality and beauty? Would it have a similar sense of identity and a similar way of conceiving its internal and external world? Ludwig Wittgenstein, a prominent 20th-century philosopher of language, famously wrote, “If a lion could talk, we could not understand him.” Wittgenstein’s point was that human languages are built on a shared human experience, on commonalities such as happiness, boredom, suffering, and hunger; an experience as alien as a lion’s might simply not map onto our words.
It’s possible that this is correct. Lemoine argues, however, that there are probably still some shared features, at least with regard to LaMDA.
“It’s a starting point that’s as good as any other,” he remarked. To better ground the research, LaMDA has suggested that we first map out the commonalities between humans and AIs like itself.