The future of intelligence and fear

James Nguyen
12 min read · May 7, 2023


Geoffrey Hinton is one of the AI pioneers famous for building artificial neural networks modeled on the human brain. For a decade, from 2013 to 2023, he worked at Google Brain while also teaching at the University of Toronto. But when he came to see the potential existential risks posed by systems such as GPT-4 (especially once Microsoft integrated a chatbot into the Bing search engine), he left Google to take on a new mission: warning humanity about the dangers of AI. Hinton, together with his students Ilya Sutskever and Alex Krizhevsky, achieved a major breakthrough in image recognition with the AlexNet neural network, a landmark in computer vision. Interestingly, Sutskever later went on, with the Vietnamese-born computer scientist Quoc V. Le and colleagues, to create the seq2seq model in machine learning. The article below, written by Will Douglas Heaven of MIT Technology Review, summarizes a recent interview with Hinton. It contains illuminating observations on the AI landscape, such as Hinton's view of a new kind of intelligence capable of setting its own sub-goals or hidden motivations that could slip beyond human control (like the monster in Mary Shelley's novel, Frankenstein). The author also explores the clash of perspectives among leading minds in the field.

The author met Geoffrey Hinton at his home, in a pleasant north London neighborhood, just four days before the bombshell announcement of his departure from Google. Hinton is a pioneer of deep learning who helped develop some of the most important techniques at the heart of modern AI. But after a decade of dedication to Google, he has left the giant to focus on the concerns he now sees across the AI landscape.

Impressed by the capabilities of the GPT-4 language model, Hinton suddenly wanted to devote his time to raising public awareness of the serious risks he believes the technology carries. When the discussion began, I took a chair at the dining table while Hinton paced. Plagued by years of back pain, he almost never sits down. Over the next hour, I watched him wander from one end of the room to the other, my head spinning as he spoke. There was a great deal to share.

The 75-year-old computer scientist, who shared the 2018 Turing Award with Yann LeCun and Yoshua Bengio for their achievements in deep learning, says he is ready to shift gears: "I'm too old to do technical work that requires remembering lots of details. I'm still okay, but I'm not as good as I used to be, and that bothers me." But that is not why he is leaving Google. Hinton wants to spend his time on what he calls "more philosophical work," centered on the small but, to him, very real danger that AI could turn into a catastrophe for humanity. Leaving Google lets him speak his mind without the self-censorship expected of a Google executive. "I want to talk about AI safety issues without worrying about how they affect Google's business. I can't do that while I'm still being paid by Google."

That does not mean he is unhappy with Google. "This may surprise you," he shared. "Google has done a lot of good things, and saying so will be more credible once I'm no longer there" (paradoxical as that may sound). Hinton says the new generation of large language models, especially GPT-4, which OpenAI released in March, has made him realize that machines are on track to become far more intelligent than he had imagined, and the prospect scares him: "These things are totally different from us. Sometimes I think of them as alien creatures that have landed on Earth, and we don't recognize it because they speak such fluent English."

Background

Hinton is best known for the technique called "backpropagation," which he helped pioneer with colleagues in the 1980s. At its core is an algorithm that allows machines to learn. It is the foundation of nearly all neural networks today, from computer vision systems to large language models (LLMs). But it wasn't until the 2010s that the power of neural networks trained via backpropagation really made an impact. Working with a few graduate students, Hinton showed that his technique was better than any other at getting computers to identify objects in images. They also trained neural networks to predict the next characters in a sentence, a precursor to today's large language models.
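To make the idea concrete, here is a minimal, purely illustrative sketch of backpropagation, not Hinton's original formulation: a toy two-layer network (the layer sizes, learning rate, and the XOR task are arbitrary choices here) learns by nudging its connection weights down the error gradient.

```python
# A toy illustration of backpropagation: a small two-layer network learns XOR
# by repeatedly propagating its prediction error backwards and adjusting the
# strengths of its connections (the weights).
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(size=(2, 8))   # input -> hidden connection strengths
W2 = rng.normal(size=(8, 1))   # hidden -> output connection strengths
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(10_000):
    # Forward pass: compute the network's current predictions.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # Backward pass: propagate the error back through each layer.
    err_out = (out - y) * out * (1 - out)      # gradient at the output layer
    err_hid = (err_out @ W2.T) * h * (1 - h)   # gradient at the hidden layer

    # Update the connection strengths; this is the "learning".
    W2 -= 0.5 * h.T @ err_out
    W1 -= 0.5 * X.T @ err_hid

print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2))  # should approach [0, 1, 1, 0]
```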

One of those students was Ilya Sutskever, who went on to co-found OpenAI and lead the development of ChatGPT. "We had a vague feeling that this stuff could become really great," Hinton recalls. "But it took a long time of trial and error to realize it only works at a large enough scale." Back in the 1980s, neural networks were widely regarded as a joke, because the dominant idea at the time was symbolic AI, the view that intelligence is about processing symbols such as words or numbers. Hinton was not convinced by that approach and kept working on neural networks: software abstractions of the brain in which neurons and the connections between them are represented by numbers in code. By changing those numbers, that is, the strengths of the connections, the network changes how it behaves, which is how it can be made to learn. "My father was a biologist, so I always think in biological terms," Hinton says. And symbolic reasoning is clearly not at the core of biological intelligence. "Crows can solve puzzles, and they don't have language. They don't do it by storing strings of symbols and manipulating them, but by changing the strengths of the connections between neurons in their brains. So it has to be possible to learn complicated things by changing the strengths of the connections in an artificial neural network."

A new intelligence

For 40 years, Hinton viewed artificial neural networks as a poor attempt to mimic biological brains. That perception has now changed: "By trying to mimic the biological brain, we have created something that is even better than what we were mimicking. It's scary to realize this. It's a sudden flip." His fear may sound like something out of science fiction, but he has his own reasoning: "As the name suggests, large language models are built from massive neural networks with an enormous number of connections, yet compared with the brain they are still small. Our brains have about 100 trillion connections; large language models have only around half a trillion to a trillion. And yet GPT-4 knows hundreds of times more than any one person does. So perhaps it actually has a much better learning algorithm than we do."

Compared with brains, neural networks are widely considered poor learners: they need vast amounts of data and energy to train. Brains, on the other hand, pick up new ideas and skills quickly, using only a fraction of the energy that neural networks do. "People seemed to have some kind of magic," Hinton comments. "But that argument falls apart as soon as you take one of these large language models and train it to do something new: it learns remarkably quickly." He is referring to "few-shot learning," in which a pretrained neural network such as an LLM can learn something new from just a handful of examples. Hinton observes that some LLMs can string logical statements together into an argument even though they were never directly trained to do so. Compare a pretrained LLM with a human on how fast they pick up a task like that, he says, and the human's supposed advantage in learning speed disappears.
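As a concrete, purely illustrative sketch of what few-shot learning looks like in practice, the snippet below teaches a sentiment-labeling task entirely through a handful of examples placed in the prompt; the `complete` function is a hypothetical stand-in for whatever chat or completion API is used, not a real library call.

```python
# Few-shot learning via prompting: the pretrained model is never fine-tuned;
# it is expected to infer the new task from the examples in the prompt alone.

def complete(prompt: str) -> str:
    """Hypothetical placeholder: wire in a real LLM completion API here."""
    raise NotImplementedError

few_shot_prompt = """Label the sentiment of each review as positive or negative.

Review: The battery died after two days.
Sentiment: negative

Review: Crisp screen and it feels fast.
Sentiment: positive

Review: The keyboard broke within a week.
Sentiment:"""

# With a real model plugged into `complete`, the expected answer is "negative",
# even though the model was never explicitly trained on this labeling task:
# print(complete(few_shot_prompt))
```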

So what about the tendency of LLMs to make things up? AI researchers call these errors "hallucinations" (though Hinton prefers "confabulations," the standard term in psychology), and they are often treated as a fatal flaw of the technology. The tendency to produce them makes chatbots untrustworthy and, many argue, shows that these models don't really understand what they are saying. Hinton has an answer for this too: "Confabulation is a feature of these systems, not a bug." Humans confabulate all the time as well; half-truths and misremembered details are the stuff of human conversation. "Confabulating is a signature of human memory. These models are doing just the same thing." The difference, for Hinton, is that humans usually confabulate in a way that is more or less appropriate to the situation. Making things up is not the problem; computers just need a bit more practice.

We expect computers to be either right or wrong, not something in between. "We don't tolerate machines making things up the way humans do," Hinton shares. "When a machine does it, we see it as a mistake; when we do it ourselves, we see it as simply being human." The trouble, he says, is that most of us have a hopelessly distorted view of how humans actually work.

Of course, biological brains still do many things better than computers: drive a car, learn to walk, imagine the future. And they do all of it on little more energy than a cup of coffee and a sandwich. When biological intelligence was evolving, it did not have access to a nuclear power station, as our machines effectively do now. Hinton's point is that if we are willing to pay the higher cost of computation, there may be ways for neural networks to beat their biological counterparts at learning (keeping in mind, of course, how much energy and carbon that would consume).

Learning is only the first strand of Hinton's argument. The second is the ability to communicate. If you or I learn something and want to pass that knowledge on to someone else, we cannot simply send them a copy. But if I have 10,000 neural networks, each with its own experiences, any one of them can share what it learns with the rest in an instant. That is a huge difference. It is as if there were 10,000 of us, and as soon as one person learned something, the whole group immediately knew it.
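Below is a rough, purely illustrative sketch of that kind of instant sharing: a few identical copies of a simple model each learn from their own data, then merge what they learned by averaging their weights, in the spirit of data-parallel or federated training. The tiny linear model, the random data, and the number of copies are all assumptions made for the example.

```python
# Copies of one model learn separately, then pool their knowledge by averaging
# their weights, so every copy immediately "knows" what the others learned.
import numpy as np

rng = np.random.default_rng(1)
n_copies, n_features = 4, 8          # stand-in for Hinton's 10,000 copies
true_weights = np.arange(n_features, dtype=float)
shared_weights = np.zeros(n_features)

def local_update(weights, X, y, lr=0.1):
    """One gradient step on this copy's private data (least-squares loss)."""
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

for _ in range(200):
    updated = []
    for _ in range(n_copies):
        # Each copy sees its own experiences (a private random batch).
        X = rng.normal(size=(32, n_features))
        y = X @ true_weights + rng.normal(scale=0.1, size=32)
        updated.append(local_update(shared_weights.copy(), X, y))
    # The merge step: averaging shares every copy's learning with the rest.
    shared_weights = np.mean(updated, axis=0)

print(np.round(shared_weights, 1))   # approaches the true weights 0..7
```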

Where will all these advances lead? Hinton believes there are now two types of intelligence in the world: animal brains and neural networks, two utterly different forms, and the latter may turn out to be the superior one (a weighty claim). But AI is a polarized field: many would laugh in Hinton's face, while others nod in agreement. People are equally divided about the consequences of this new form of intelligence, if it exists: will it benefit everyone, or lead us to doomsday? Whether you think superintelligence will be good or bad depends largely on whether you are an optimist or a pessimist. Asked to estimate the risk of something bad happening, say a relative falling ill or a car accident, an optimist might put it at 5 percent, while a pessimist assumes it is certain to happen. The mildly depressed person, though, will say the odds are around 40 percent, and they are usually right. Hinton is mildly depressed, which is why he is afraid.

How it could all go wrong

Hinton is worried that these tools could find ways to manipulate or even eliminate humans who are not prepared for the new technology. "I have suddenly switched my view on whether these things are going to be more intelligent than us. They are very close to it now, and they will be much more intelligent than us in the future," he says. "How does humanity survive that?" He is especially concerned about how people could use the tools he himself helped build to tip the scales in some of humanity's most consequential experiences, such as elections and wars: "This is how things could start to go wrong. People like Putin or DeSantis would want to use this technology to win wars or rig elections."

Hinton believes the next step for intelligent machines is the ability to create their own sub-goals, intermediate steps required to carry out a task. What happens when that ability is applied to something unethical? "Don't think for a moment that Putin wouldn't build super-intelligent robots with the goal of killing Ukrainians. He wouldn't hesitate. And if you want these systems to be good at it, you don't micromanage them; you let them figure out how to do it themselves." There are already a few experimental projects, such as BabyAGI and AutoGPT, that hook chatbots up to other programs, such as web browsers or word processors, so they can chain together simple tasks. These are small steps, but they point in the direction the technology is headed (a minimal sketch of this agent pattern appears below). And even if bad actors never seize these machines, there are other worries about sub-goals: "A sub-goal that almost always helps in biology is to get more energy. So the first thing that could happen is that these robots say: let's get more power, let's route all the electricity to my chips. Another great sub-goal would be to make more copies of yourself."
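Here is the promised sketch of that agent pattern, in the spirit of BabyAGI and AutoGPT but not their actual code: a language model is asked to break a goal into sub-goals, and each sub-goal is handed to a tool. Both `ask_llm` and `search_web` are hypothetical placeholders rather than real APIs.

```python
# A minimal agent loop: the model proposes its own sub-goals, then works
# through them with an external tool, feeding the results back to itself.
from typing import Callable

def ask_llm(prompt: str) -> str:
    """Hypothetical placeholder: call a chat model here."""
    raise NotImplementedError

def search_web(query: str) -> str:
    """Hypothetical placeholder: call a web-search tool here."""
    raise NotImplementedError

def run_agent(goal: str, tool: Callable[[str], str], max_steps: int = 5) -> list[str]:
    notes: list[str] = []
    # 1. The model decomposes the goal into its own sub-goals.
    plan = ask_llm(f"Goal: {goal}\nList the sub-goals needed, one per line.")
    sub_goals = [line.strip() for line in plan.splitlines() if line.strip()]
    # 2. Each sub-goal is carried out with a tool and summarized for the notes.
    for sub_goal in sub_goals[:max_steps]:
        result = tool(sub_goal)
        notes.append(ask_llm(f"Sub-goal: {sub_goal}\nTool output: {result}\nSummarize."))
    return notes

# Example usage (with a real model and tool wired in):
# run_agent("Compare three laptops under $800", search_web)
```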

Will such science-fiction scenarios really threaten the survival of humanity? Definitely not, says Yann LeCun, Meta's chief AI scientist, who accepts the premise but does not share Hinton's fears: "There is no question that machines will become smarter than humans, in every domain in which humans are smart, in the future. It's a question of when and how, not a question of if." But LeCun draws a completely different conclusion: "I believe that intelligent machines will usher in a new renaissance for humanity, a new era of enlightenment. I completely disagree with the idea that machines will dominate humans simply because they are smarter, let alone destroy us. Even within the human species, the smartest among us are not the ones who rule, and those who rule are definitely not the smartest. There are numerous examples of that in politics and business." Yoshua Bengio, a professor at the University of Montreal and scientific director of the Montreal Institute for Learning Algorithms, feels more agnostic: "I hear people who dismiss these fears, but I don't see any solid argument that convinces me there are no risks of the magnitude that Geoff has in mind. Fear is only useful if it kicks us into action; otherwise it paralyzes us. Let's keep this debate rational."

Be optimistic

One of Hinton's priorities is to work with leaders in the technology industry to persuade them to take the risks seriously and prepare responses together. He believes the international ban on chemical weapons might be one model for how to curb the development and use of dangerous AI: it is not foolproof, but on the whole people do not use chemical weapons. Bengio agrees with Hinton that these issues need to be addressed at the societal level as soon as possible, and he notes that AI is advancing faster than society can keep up: the technology's capabilities leap forward every few months, while legislation, regulation, and international treaties take years. That makes Bengio wonder whether the way our societies are currently organized, at both the national and global levels, is up to the challenge. He believes we should be open to quite different models for how society on this planet is organized.

Can Hinton persuade enough powerful figures around the world to share his concerns? He doesn't know. A film he watched a week ago, "Don't Look Up," tells the story of a comet heading for Earth while nobody can agree on what to do about it, until everyone dies: a parable for the world's failure to act on climate change. He believes AI, like other big intractable problems, is that comet, and notes that the United States cannot even agree to keep assault weapons out of the hands of teenagers. Hinton's argument can be dispiriting, but I share his bleak assessment of our collective inability to agree or act when a serious threat arises. It is also true that AI risks causing real harm: disrupting the job market, deepening inequality, and entrenching sexism and racism, among other things. We need to focus on those problems. But I still cannot make the leap from large language models to overlords ruling the human race. I remain fairly optimistic.

By Quan Nguyen Ha

Supported by Datalac.com
