DATALAC: How to think wisely about AI?
Lex Fridman, an AI researcher at MIT and host of the well-known podcast that bears his name, sent a warning about AI to his audience after a conversation with Sam Altman, co-founder and CEO of OpenAI, the organization behind ChatGPT: “This is a critical moment in the history of human civilization, a fundamental societal transformation that nobody can predict when or how it will unfold, probably within our lifetime. The collective intelligence of humanity pales in comparison with the general superintelligence being built in large-scale AI systems. This is fascinating but also deeply frightening. It is fascinating because of the range of applications, both those we already know (like ChatGPT) and those we do not yet, that can enhance human capability, promote prosperity, reduce poverty, and ease the pursuit of happiness. It is frightening because AGI could wield its power, intentionally or unintentionally, to destroy human civilization. It could extinguish the human spirit as depicted in Orwell’s 1984, or create the mass hysteria of Huxley’s “Brave New World,” a dystopia in which people come to love those who oppress them and sink into technologies that erode the capacity for individual thought, above all the ability to reason.”

Fridman stressed how important it is, at this moment, to hold discussions about AI with experts, engineers, and philosophers, optimists and pessimists alike. In particular, he has spoken with Max Tegmark, president of the Future of Life Institute, which published an open letter calling for a six-month pause on the development of AI systems more powerful than GPT-4 and deployable at larger scale. Humanity, the letter argues, needs to weigh the risks and opportunities carefully. The article below from “The Economist” lays out the big picture of the ongoing debates and of governments’ policy initiatives on AI, and is well worth attention.
The open letter from the Future of Life Institute, a non-profit organization, raised some hard questions: “Should we automate all jobs, including those that are most fulfilling to humans? Should we develop non-human minds that surpass us in computation and intelligence… and gradually replace us? Should we risk losing control of our civilization?” The letter called for a six-month “pause” in the development of the most advanced forms of artificial intelligence (AI), and was signed by prominent figures in technology, including Elon Musk. It is the most striking sign yet that rapid progress in AI has stirred fears about potential threats to humanity.
Specifically, the new large language models (LLMs) behind tools like ChatGPT (created by the startup OpenAI) have surprised even their creators with unexpected capabilities as they are scaled up. They can solve logic puzzles, write computer code, and identify a movie from a set of clues written in emoji.
These models stand to transform humans’ relationship with computers, with knowledge, and even with themselves. Supporters of AI stress its potential to solve major problems: developing new drugs, designing new materials to combat climate change, or untangling the complexities of nuclear fusion power. To others, the prospect of AI surpassing human abilities conjures the dystopian scenarios of science fiction: a kind of machine domination over humans.
This messy mix of excitement and fear makes it hard to weigh the opportunities and risks. But there are lessons to be learned from other industries and from past technological shifts. What has changed to make AI so much more powerful? How afraid should we be? And what should governments do?
The first wave of modern AI systems, which emerged a decade ago, relied on carefully labelled training data. Once exposed to enough labelled examples, such a system could learn to do things like recognize images or carry on conversations. Today’s systems no longer require pre-labelled data, so they can be trained on much larger collections of text taken from the internet. Large language models can, in effect, be trained on the whole internet, which explains their power, for good and for ill.
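To make that contrast concrete, here is a minimal, hypothetical Python sketch (not from the article or any real system) of the two training recipes described above: an older supervised setup that needs a human-provided label for every example, and a self-supervised setup that manufactures its own “predict the next word” training pairs from raw, unlabelled text.

```python
# Illustrative sketch only: contrasts hand-labelled supervised data with
# self-supervised "next-word prediction" data derived from raw text.
# All names and data below are hypothetical examples.

# 1) The older recipe: every training example needs a human-provided label.
labelled_examples = [
    ("cat_photo_001.jpg", "cat"),
    ("dog_photo_042.jpg", "dog"),
]
print("Supervised example (needs a human label):", labelled_examples[0])

# 2) The modern recipe: raw, unlabelled text is turned into training pairs
#    automatically, by asking the model to predict each next word.
raw_text = "governments are debating how to regulate artificial intelligence"

def next_word_pairs(text: str):
    """Build (context, next_word) pairs from plain text, with no labelling."""
    words = text.split()
    return [(words[:i], words[i]) for i in range(1, len(words))]

for context, target in next_word_pairs(raw_text):
    print(f"context = {' '.join(context):<55} -> predict: {target}")
```

The point of the second recipe is that any stretch of ordinary text supplies its own training signal, which is why these models can be fed data at internet scale without armies of human labellers.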
This capability became clear to the wider public when ChatGPT was released last November. Nearly a million people used it within a week, and 100 million within two months. It was quickly put to work writing school essays and wedding speeches. ChatGPT’s popularity, and Microsoft’s move to build the technology into its search engine, Bing, prompted rival firms to release chatbots of their own. Some have produced strange results: Bing Chat suggested to a journalist that he should leave his wife, and ChatGPT has been accused of defaming a law professor. Language models produce answers that carry a patina of truth but often contain factual errors or outright fabrications. Even so, Microsoft, Google, and other technology companies have begun integrating language models into their products to help users draft documents and perform other tasks.
Recent advances in AI’s power and visibility, together with growing public awareness of its benefits and risks, have raised fears that the technology is now advancing too quickly to be controlled. Hence the calls to curb its development, and the concerns that AI threatens jobs, reputations, the ability to tell truth from falsehood, and even the survival of humanity. The fear of machines taking away jobs is centuries old, yet so far new technology has always created new jobs to replace those it destroyed. Machines can perform some tasks, but not all, and much work still requires human hands. Has something fundamentally changed this time? Could AI trigger a major upheaval in the labor market? There is no clear sign of it yet. Earlier technologies tended to replace low-skilled work; AI, by contrast, can perform some white-collar tasks, such as summarizing documents and writing code.
How much existential risk AI poses is fiercely debated, and experts are divided. In a survey of AI researchers carried out in 2022, 48% thought there was at least a 10% chance that AI’s impact would be extremely bad (for example, causing human extinction). But 25% put the risk at 0%, and the median researcher put it at 5%. The nightmare scenario, in which an advanced machine causes harm at scale, for instance by producing toxins or viruses or by persuading humans to commit terrorist acts, does not require evil intent: researchers worry that future AIs may have goals that diverge from those of their creators.
The worst-case scenarios deserve careful consideration, but it should be remembered that even the experts are guessing about how the technology might leap ahead. Many of these scenarios assume that future AIs will have unrestricted access to energy, money, and computing power, which are real constraints today and which could be denied to rogue AI models in the future. Moreover, experts tend to overstate risks in their own field compared with other forecasters (and Mr. Musk, who is launching his own AI startup, has an interest in seeing his rivals pause their work). Proposals for heavy regulation, or even a halt to AI research, look like over-reactions; a complete halt would in any case be almost unenforceable.
Regulation is necessary, but it should rest on sound, down-to-earth reasoning rather than on a mission to save humanity. Existing AI systems already raise real concerns about bias, privacy, and intellectual property. As the technology advances, other problems will come into view. The key is to balance the promise of the technology against its risks, and to be ready to adapt as it evolves.
Governments are currently pursuing three broad approaches. At one end of the policy spectrum is Britain, which proposes a “light-touch” approach: no new rules or regulatory bodies, just the application of existing regulations to AI. The aim is to encourage investment and turn Britain into an “AI superpower”. America has taken a broadly similar approach, although the Biden administration is gathering public input on what a rulebook for AI should look like.

The EU has chosen a stricter path. Its proposed law classifies different uses of AI by level of risk and requires increasingly stringent monitoring and disclosure as the risk rises, from recommending music to driving cars. Some uses of AI are banned outright, such as subliminal advertising and remote biometric identification, and firms that break the rules face heavy fines. For some critics these regulations are too burdensome; others argue they do not go far enough, and that governments should treat AI like medicines, with a dedicated regulator, strict testing, and pre-approval before public release. China is doing some of this, requiring companies on the mainland to register their AI products and undergo a rigorous security review before release. Safety may be one motive, but politics matters more: the Chinese government requires that these dazzling AI products reflect the “core values of socialism”.
In short, what should be done? A light-touch approach is probably not enough. If AI is as important a technology as cars, planes, and medicines, and there is good reason to think it is, then it will need new rules of its own. The EU’s approach comes closest, though its classification system is too rigid, driven by fear rather than by principles, which would allow more flexibility. A more suitable approach is to require sensible disclosure about how systems are trained, how they operate, and how they are monitored and controlled, which would make AI rules comparable to those in other industries. It would also allow regulation to tighten over time if needed. At that point a dedicated regulator might become appropriate, along with intergovernmental agreements of the kind that govern nuclear weapons, which likewise threaten humanity’s survival. To face that risk squarely (if it exists), governments could form a body modeled on CERN, the large particle-physics laboratory, to study AI safety and the related ethical questions, an area in which companies have little incentive to invest as much as society would wish.
New technologies always carry risks, but they also bring extraordinary opportunities; the task is to bring their development into balance. A measured approach today can provide the foundation on which further rules can be added later, and the time to start building that foundation is now.
By Quan Nguyen Ha