A dilemma of self-identity brought on by artificial intelligence and the fallacy of “mind uploading.”

Despite our biological constraints, many "futurists" believe that in the near future we will be able to "upload our minds" into computer systems and "live forever." Although fundamentally flawed, this idea has gained traction in recent years. The concept is so pervasive that Amazon has a TV show, Upload, based on it, along with countless other cultural references to the same idea.

The idea of "mind uploading" rests on the reasonable assumption that, with enough computing power dedicated to the task, the human brain could be simulated in software, just like any other system that obeys the laws of physics. To be clear, the goal of mind uploading is not to model the human brain in general, but to recreate the mind of a single individual, down to the last neuron and connection.

How plausible is that?

Naturally, this is an extremely difficult task. Your brain contains roughly 86 billion neurons, each connected to thousands of others: around 100 trillion connections in total, a thousand times more than the number of stars in the Milky Way galaxy. Everything that makes you unique, from your personality and memories to your fears, abilities, and aspirations, is encoded in those trillions of connections. To create a digital copy of your brain (an infomorph), a computer system would need to reproduce the overwhelming majority of those connections, down to their most subtle interactions.

Such complex modeling could never be performed by hand. Believers in "mind uploading" typically envision an automated procedure in which an extremely powerful scanner, something like a supercharged MRI machine, captures the detailed anatomy of a person's brain. AI software would then use that precise scan to build a simulation of each individual neuron and its thousands of connections to other neurons.

This is an enormously difficult task, but it is possible in principle. Large numbers of simulated brains could conceivably dwell inside a detailed simulation of physical reality. Even so, the idea that "mind uploading" would allow any biological person to live longer is fundamentally wrong.

The fundamental problem lies in who, exactly, gets to live longer. With sufficient technological advances, it is theoretically conceivable to copy and recreate the structure and function of a particular human brain inside a computer simulation, yet the person who was copied would go on living in their biological body, their brain still inside their skull.

The digital version of a human would be an imposter.

To put it another way, if you voluntarily underwent "mind uploading," you wouldn't feel as though you had suddenly been teleported into a computer simulation. On the contrary, you wouldn't feel anything at all. Your brain could have been copied while you slept, or while you were under anesthesia, and you would have no idea that a duplicate of your mind now existed in a computer program.

The concept of a digital twin, or “you but not really you,” is closely related to the concept of mind uploading.

Like a digital clone or twin, the duplicate would not really be you. Imagine a copy of your entire mental state, including every thought and experience up to the moment your brain was scanned. Once instantiated, however, the duplicate would begin forming its own memories inside the virtual world. It might learn and grow by interacting with other simulated minds, or perhaps robotic interfaces would let it engage with the physical world. Meanwhile, your biological self would be learning and adapting too.

The differences between your biological brain and its digital replica would therefore appear almost immediately. For a split second they would mirror each other, then diverge. You and your copy would develop different skills and expertise. Your perspectives and insights would differ. Your personalities and goals would drift apart. Within a few years the changes would be significant. And yet both would feel like "the genuine you."

This is a major consideration: your duplicate would experience the same sense of personal identity that you do. It would believe it has the same right to live where it wants, work where it wants, and make its own choices. Both you and the copy would feel equally entitled to your name, setting up an argument over who gets to use it.

If I built a replica of myself, it would wake up in a simulated world fully convinced it was the genuine Louis Barry Rosenberg, a lifetime technologist. If it could interact with the physical world through robotic means, the copy would feel it had every right to live in my house, drive my car, and go to my job. After all, it would remember buying that house, getting that job, and doing everything else I can remember doing.

This process of "mind uploading" would not create an immortal version of you. Instead, it would create a rival: someone functionally indistinguishable from you in every way except name, with all of your abilities and memories and an equal sense of entitlement to your identity.

And yes, the duplicate would feel as much a part of your family as you do. If this technology ever became feasible, we could imagine the digital copy suing you for joint custody of your kids, or at least visitation rights.

To sidestep the dilemma of creating a replica of a person rather than providing digital immortality, some futurists propose another approach. Instead of scanning and uploading a person's mind to a computer, they suggest gradually converting a person's brain, neuron by neuron, to a non-biological substrate. This is typically called "cyborging" rather than "uploading," and it is an even more daunting technological endeavor than scanning and simulating. Moreover, it's questionable whether gradual replacement genuinely solves the identity problem, so I'd call this option iffy at best.

And so, despite all this, mind uploading is hardly the sure route to immortality portrayed in fiction. At best, it's a method for creating a copy of yourself, one that would react the way you would if you woke up one day to the news that your spouse wasn't really your spouse, your children weren't really your children, and your job wasn't really your job.

Is that something you'd want to do to a copy of yourself?

To me, that is deeply unethical. In fact, I found it troubling enough that over a decade ago I published a cautionary graphic novel about it called UPGRADE. The events of the novel take place in a near future in which people spend their entire waking lives online.

What the people of this world don't know is that an artificial intelligence system is constantly monitoring their every move in the metaverse, building a behavioral digital model of their minds (no brain scanning required). Once the profiles are complete, the fictional AI persuades individuals to "upgrade themselves" by ending their biological lives and letting their digital duplicates take over.

I wrote the book over a decade ago with irony in mind. Today, however, an entire field of work is moving in exactly this direction. A growing sector of the economy, euphemistically dubbed the "digital afterlife industry," aims to "digitize" deceased loved ones so that surviving family members can continue to communicate with them after death. Other start-ups want to build a profile of your metaverse activities so you can "live forever" inside their digital products. Even Amazon has entered the market, demonstrating how Alexa can recreate your grandmother's voice so you can hear her again.

How long until a business starts promoting the cost-saving advantages of ending your life early and letting your digital substitute live on? I fear it is only a matter of time. My only hope is that entrepreneurs will tell the world the truth about mind uploading: it is not a road to immortality.

At least, not in the way many people believe.

Louis Rosenberg, Ph.D., is a pioneer in the fields of virtual reality, augmented reality, and artificial intelligence. He earned his Ph.D. from Stanford University, has been awarded more than 300 patents, and has founded a number of successful companies. Rosenberg began his career at the Air Force Research Laboratory, where he developed the first fully functional augmented reality system combining the real and virtual worlds. He is currently the chief executive officer of Unanimous AI, the chief scientist of the Responsible Metaverse Alliance, and the global technology advisor to the XR Safety Initiative (XRSI).
