Should we be more concerned about the potential dangers of dumb AI than of advanced AI?

Imagine it is the year 2052. Nuclear power has been widely adopted as the primary source of electricity, saving the world from catastrophic climate change. In the accepted opinion of the day, nuclear power plants are a solved problem, and Three Mile Island is invoked more as a joke than as a warning. Improvements in software automation were the primary factor in calming public concerns about nuclear waste and plant explosions. Unbeknownst to us, despite coming from a variety of manufacturers around the globe, all of that nuclear power plant software shares a fundamental bias. After functioning normally for twenty years, several unconnected plants fail in the same year. The CEOs of the nuclear power companies then discover that everyone with experience operating Class IV nuclear power plants has either left the industry or retired. We are at a crossroads where both progress and safety carry consequences.
Today, everyone seems to be talking about AI. Machine learning has emerged from its multi-decade “AI winter” into a world of technological advances such as reinforcement learning and transformers, together with computing resources that have finally matured enough to take advantage of them.

The rise of AI has not gone unnoticed; it has generated quite a bit of discussion. Much of the loudest commentary comes from people who are frightened of AI, a group that ranges from AI ethics researchers concerned about bias to rationalists contemplating the possibility of human extinction. In particular, they worry about AI that is either too complex to understand or too advanced for humans to control, either of which could end up working against our original intentions. AI proponents typically answer from a position of technological optimism: they contend that these pessimists are off base, citing both theoretical arguments and the technology’s positive contributions so far as evidence that AI will continue to benefit humanity in the years to come.
There is a fundamental misunderstanding at the heart of both of these positions. Strong artificial intelligence of the kind these debates imagine is not arriving in the foreseeable future. Instead, we face a worse threat, one that is here today and will only grow in severity: the widespread deployment of AI before it has been adequately tested. The danger, in other words, is not AI that is too advanced but AI that is too simplistic. A well-intentioned but naive AI, like the one in the opening scenario, poses the bigger threat to humanity. Meanwhile, we choose to look the other way.
Dumb AI is already here.
The primary reason dumb AI poses a greater threat than strong AI is that the former already exists, while the latter’s feasibility remains an open question. As Eliezer Yudkowsky put it, the greatest danger of artificial intelligence is that people conclude too early that they understand it.
Real AI is already in use everywhere from factories to translation services. McKinsey found that 70% of businesses that adopted AI reported an increase in revenue as a result. Nor are these inconsequential applications: the technology is already at work in high-stakes contexts that the general public assumes are still years away.
Despite the absence of an autonomous weapons treaty, the United States military has already begun deploying autonomous weapons (specifically, quadcopter mines) that can kill without human intervention. Amazon actually used an AI-powered resume-screening tool before pulling it for discriminating against women. Police departments’ use of facial recognition has led to wrongful arrests. Epic Systems’ sepsis prediction tool, deployed in hospitals across the United States, regularly gets it wrong. And IBM lost a $62 million clinical contract over its “unsafe and inaccurate” treatment suggestions.
Prominent scholars such as Michael Jordan have raised the obvious objection that these are instances of machine learning rather than AI, and that the terms should not be used interchangeably. The core of the argument is that machine learning systems are not truly intelligent, for a variety of reasons: they cannot handle genuinely novel situations, and they are unstable in the face of even minor perturbations. This is a valid criticism, but it remains significant that machine learning systems can succeed at challenging tasks with little to no human guidance. They are not capable of flawless reasoning, but then, neither are we (if we were, presumably we would never lose to imperfect programs like AlphaGo).
In principle, the dangers of dumb AI should be caught by testing. In practice, this falls apart because we often deploy technologies that have only been tested in easier domains where the margin for error is larger. Tesla’s Autopilot and Facebook’s moderation tools are built on the same underlying neural network technology, yet the former errs on the side of being far too strict while the latter is far too lenient.
Where do the dangers of dumb AI come from?
To begin with, there is significant danger in AI that is built on essentially sound technology but then applied in entirely inappropriate ways. In some fields, bad practice has come to dominate the landscape: one meta-analysis of microbiome research, for instance, found that 88 percent of the papers in its sample contained errors so severe that their results could not be relied upon. As artificial intelligence becomes more broadly used, a particular concern is that there are far more use cases than there are people who know how to build AI systems properly, let alone deploy and monitor them.
The other significant issue is latent bias. Here, “bias” does not just mean discrimination against underrepresented groups; it refers to the more technical sense of a model exhibiting unanticipated behavior that is consistently skewed in a particular direction. Bias can originate from many sources: an inadequate training set, an unintended incentive in the fitness function, or even a subtle implication of the mathematics. The fact that any content-filtering algorithm on social media produces a bias toward outrageous behavior, for instance, should give us pause, regardless of which firm, nation, or institution built the model in question. There may be many more model biases we have not yet found. The major danger is that these biases can have long feedback cycles and only become detectable at scale, in which case we will not notice them until production systems have already caused significant harm. A toy illustration of one such source, an unrepresentative training set, follows.
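As a minimal sketch, with entirely synthetic data and assumed numbers rather than anything from a real deployment, the Python snippet below shows how a training set that underrepresents one group yields a model whose errors concentrate on that group, even though aggregate accuracy looks healthy:

```python
# Illustrative only: a training set that underrepresents group B produces a
# model that is consistently wrong in one direction for that group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Two-feature data where the true decision boundary differs slightly by group."""
    X = rng.normal(size=(n, 2)) + shift
    y = (X[:, 0] + 0.5 * X[:, 1] > shift).astype(int)
    return X, y

# Group A dominates the training data; group B is barely represented.
Xa, ya = make_group(5000, shift=0.0)
Xb, yb = make_group(100, shift=1.5)
X_train = np.vstack([Xa, Xb])
y_train = np.concatenate([ya, yb])

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Evaluate on balanced held-out samples from each group.
Xa_test, ya_test = make_group(2000, shift=0.0)
Xb_test, yb_test = make_group(2000, shift=1.5)
print("accuracy on group A:", model.score(Xa_test, ya_test))
print("accuracy on group B:", model.score(Xb_test, yb_test))
# Overall accuracy looks fine, but the errors pile up on group B --
# a skew you only see if you think to measure performance per group.
```

The point of the sketch is simply that the skew never shows up in an aggregate metric; you have to know to look for it.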
Models with these kinds of hidden flaws may also be deployed far too broadly. As Stanford’s Percy Liang has noted, “foundation models” are now used extensively, which means that problems in those models can have knock-on effects far downstream. The nuclear plant scenario at the outset of this essay is an example of exactly this sort of danger.
And as we keep rolling out dumb AI, our capacity to fix it erodes. After the Colonial Pipeline hack, the company’s CEO lamented that it could not switch to manual operation because the employees who had once run the pipes manually had retired or died, a phenomenon known as “deskilling.” Maintaining a manual alternative, such as teaching Navy sailors celestial navigation in case GPS fails, is desirable in some cases, but it becomes less practical as society automates more and more tasks. The result is a kind of industrial exhaustion, as Samo Burja puts it, in which more and more people forget how to do things for themselves.
Better AI, not less AI, is the answer.
What does this mean for the future of AI, and what should we do next?
Artificial intelligence is not going away, and its use will only become more widespread. Any solution to the problem of dumb AI must address both the immediate and intermediate problems outlined above and pursue the more permanent fixes that could eliminate the issue, at least in the absence of strong AI.
Fortunately, many of these problems are opportunities for new businesses. Estimates vary, but the AI industry is plausibly worth more than $60 billion and growing at a roughly 40% CAGR. In an industry that large, each of these problems could become a multibillion-dollar business.
The first major problem is AI that is built or deployed in ways that violate established best practices. There is room for a “General Assembly for AI” that improves education and training in the field, both at the university level and for working professionals. SaaS firms that do the heavy lifting can address many of the fundamentals, from correctly implementing k-fold cross-validation (see the sketch below) through production deployment. Each of these problems is large enough to warrant its own company.
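For the cross-validation point, here is a minimal sketch of what “correct” looks like, using a synthetic dataset as a stand-in: the preprocessing lives inside the pipeline so it is re-fit on each training fold and no information leaks from the validation split.

```python
# K-fold cross-validation with preprocessing kept inside each fold.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Scaling is part of the pipeline, so it is re-estimated on every training fold
# instead of being fit once on the full dataset (a common leakage mistake).
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
print(f"accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```

Getting details like this right is exactly the kind of unglamorous work that tooling and training companies could take off practitioners’ plates.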
Data is the next major problem. Models need large amounts of data for both training and testing, whether the system is supervised, unsupervised, or even symbolic. Collecting the data is only part of the challenge; labeling it, detecting bias, and ensuring completeness can be just as time-consuming and frustrating. Scale AI’s success shows that such businesses can work, but there is still plenty left to build, such as gathering ex-post performance data for model tuning and audits; a toy example of the kind of basic auditing involved follows.
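As an illustration only, and not any particular company’s tooling, the snippet below runs two of the simplest audits on a labeled table: how complete each column is, and how the labels are balanced overall and per group. The column names and tiny synthetic table are assumptions for the example.

```python
# Basic dataset auditing: completeness and label-balance checks that are
# easy to skip and expensive to discover late.
import pandas as pd

def audit(df: pd.DataFrame, label_col: str, group_col: str) -> None:
    # Completeness: fraction of missing values per column.
    missing = df.isna().mean().sort_values(ascending=False)
    print("fraction missing per column:\n", missing, sep="")

    # Label balance overall and per group -- a crude first check for the
    # kind of skew that later shows up as model bias.
    print("\nlabel balance overall:\n",
          df[label_col].value_counts(normalize=True), sep="")
    print("\nlabel balance per group:\n",
          df.groupby(group_col)[label_col].value_counts(normalize=True), sep="")

# Tiny synthetic example table.
df = pd.DataFrame({
    "label": [1, 0, 1, 1, 0, 1, 0, 1],
    "region": ["north", "north", "north", "north", "south", "south", "north", "north"],
    "income": [50, None, 40, 60, None, 30, 55, 45],
})
audit(df, label_col="label", group_col="region")
```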
Last but not least, we need to make AI itself better. What we should fear is too little research and too few companies working to improve AI, not too many. The biggest threat comes not from AI that is too good, but from AI that is not good enough. That means funding new foundation models, methods that reduce the amount of data needed to train capable models, and more. Making models more auditable through better explainability and scrutability should also be a major focus of this effort. Much of this will require R&D investment from existing companies as well as funding for universities.
Still, we need to exercise caution: our attempts to fix the problem could also make it worse. Transfer learning, for example, can reduce mistakes by letting different learning systems share what they have learned, but it can just as easily propagate bias or faulty measurements (a toy sketch follows). The costs and benefits have to be weighed carefully. Many AI tools are genuinely helpful: they improve mobile photography, make it easier for people with disabilities to get around, and enable better and freer translation. Let’s not throw the baby out with the bathwater.
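The sketch below is purely illustrative and entirely synthetic: a “pretrained” model is fit on historically biased labels, its score is then reused as a frozen feature for a downstream task with unbiased labels, and the downstream model inherits the group disparity anyway.

```python
# How a source model's bias can travel with its reused representation.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# "Pretraining" data: historical decisions where group membership (column 2)
# leaked into the labels, i.e., the source labels themselves are biased.
n = 5000
skill = rng.normal(size=n)
group = rng.integers(0, 2, size=n)
X_src = np.column_stack([skill, rng.normal(size=n), group])
y_src = ((skill + 1.0 * group) > 0.5).astype(int)      # biased historical labels

source_model = LogisticRegression(max_iter=1000).fit(X_src, y_src)

# Downstream task reuses the frozen source model's score as its only feature
# (a crude stand-in for reusing a pretrained representation).
m = 200                                                 # little downstream data
skill_t = rng.normal(size=m)
group_t = rng.integers(0, 2, size=m)
X_t_raw = np.column_stack([skill_t, rng.normal(size=m), group_t])
y_t = (skill_t > 0.5).astype(int)                       # downstream labels are unbiased
X_t = source_model.decision_function(X_t_raw).reshape(-1, 1)

downstream = LogisticRegression(max_iter=1000).fit(X_t, y_t)

# Probe: two identically skilled candidates, one from each group.
probe = np.array([[0.6, 0.0, 0], [0.6, 0.0, 1]])
probe_feat = source_model.decision_function(probe).reshape(-1, 1)
print("P(positive) for identical candidates, group 0 vs 1:",
      downstream.predict_proba(probe_feat)[:, 1])
# The inherited score rates group 1 more favorably, so the downstream model
# does too -- the bias transferred along with the representation.
```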
There’s no need to sound the alarm, either. We are too quick to judge new technologies harshly, and this is typically the case when it comes to artificial intelligence and its mistakes. Human error rates for police lineups may reach 39%, yet the ACLU discovered that Congressman John Lewis was wrongly included in a face recognition mugshot. Congressman Lewis’s position as an American hero is often exploited as a “gotcha” for techniques like Rekognition. Similarly, when Tesla batteries catch fire, it is a major setback, but overall, fires in electric vehicles are considerably less common than in internal combustion engine vehicles. The unknown might be unsettling, but traditionalists have no right to block progress.