DATALAC, ChatGPT and AGI
In his famous book “The Language Instinct,” psychologist Steven Pinker argues that humans have an innate capacity for language, an ability shaped by evolution to solve specific communication problems in primitive hunter-gatherer communities. Language is instinctual for humans in the way that spinning webs is for spiders or building shells is for oysters. In the follow-up book “How the Mind Works,” he examines how humans make rational and irrational decisions: what makes us happy, afraid, disgusted, captivated (by a work of art, say), or in love, and how we grapple with imponderables like morality, religion, and consciousness. Both books deepen our understanding of the human mind and are well worth reading. Pinker has also shared some very interesting opinions on ChatGPT in “The Harvard Gazette,” the official news site of Harvard University, where he teaches: on the “dangers” and the great “benefits” this AI technology brings to humanity, and on whether AI can catch up with the innate human capacity to use language and understand the world. I summarize his perspective below:
According to Steven Pinker, ChatGPT has genuinely impressed the public, and it can keep improving as it fabricates less and makes fewer errors. In November of last year, scholars and the wider world were amazed when OpenAI launched a chatbot called ChatGPT. It can answer questions almost instantly, across many fields, in a conversational and somewhat authoritative manner, in effect by recombining forms of writing drawn from many different genres. All of this comes from optimizing a form of AI called a large language model (LLM).
ChatGPT can continue to learn and improve its responses. The question to ask is: how much better can it get? Pinker has spent years probing the connections among mind, language, and reasoning, notably in his best-selling book “The Language Instinct,” and he has his own answer to whether humans, specifically writers and thinkers, should worry about being replaced by ChatGPT. ChatGPT has drawn wide attention, with reactions ranging from fascination to unease. This shows how badly our intuitions fail when we try to imagine what statistical pattern-matching can do once it is immersed in trillions of sentences and tuned across hundreds of billions of parameters. Like most people, Steven never believed a system could write the Gettysburg Address (Abraham Lincoln’s most famous speech) in the style of Donald Trump. Humans can hardly grasp such higher-order statistics: patterns of patterns of patterns in the data. It is amazing that ChatGPT can create plausible, logical, well-structured prose without any real knowledge of the world; specifically, it has no explicit goals, no explicitly represented facts, none of the things we assume are necessary to produce intelligent-sounding prose.
Its apparent intelligence only makes its serious blunders stand out more. ChatGPT delivers its confabulations with great confidence, such as the claim that there have been four female U.S. Presidents, among them Luci Baines Johnson, whom it said held office from 1973–77 (she is in fact the youngest daughter of Lyndon B. Johnson and was never president). It has made other elementary mistakes as well. For the past 25 years, Steven has opened his psychology courses by showing how even the best artificial intelligence models of the day could not replicate the simplest kinds of “common sense.” The emergence of ChatGPT made him wonder whether those lectures had become obsolete. It turns out he needn’t worry. When he asked ChatGPT, “If Mabel is alive at 9 a.m. and at 5 p.m., is she alive at noon?”, the model answered: “It cannot be determined whether Mabel is alive at noon. She is known to be alive at 9 and 5 o’clock, but there is no information about whether she is alive at noon.” This answer shows that ChatGPT does not understand basic facts about the world, such as that humans live through continuous stretches of time and that once we die, we stay dead forever (unless it believes in the theory of reincarnation). Perhaps it has simply never encountered texts, in its training databases or on the internet, that spell out such basics, the way Steven knows that goldfish cannot wear underwear without ever having read it.
We are dealing with an alien intelligence capable of miraculous feats, but not in the manner of a human mind. Humans do not need access to half a trillion words of text to learn to speak or to solve problems; at three words a second for eight hours a day, reading that much would take roughly 15,000 years. Nevertheless, it is remarkable how much can be extracted from very high-order statistical patterns in mammoth datasets. OpenAI’s goal of creating “artificial general intelligence” (AGI, a machine that can understand the world and learn to perform the full range of human tasks) sounds impossible, and the very idea of a “general machine” seems incoherent. We can imagine many kinds of superpowers, such as Superman’s flight, invincibility, and X-ray vision, but imagining them does not make them realizable. Likewise, we can imagine a superintelligence that makes humans immortal, brings world peace, or conquers the universe. But real intelligence consists of sets of algorithms for solving particular problems in particular kinds of worlds. What we have now, and perhaps always will have, are devices that outperform humans at some challenges and not at others.
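As a quick sanity check of that 15,000-year figure, the arithmetic works out as follows. The inputs (reading speed, hours per day, corpus size) are the text’s illustrative assumptions, not measurements:

```python
# Back-of-the-envelope check of the reading-time figure quoted above.
# All inputs are the illustrative assumptions from the text, not measured values.
words_total = 5e11            # half a trillion words of training text
words_per_second = 3          # assumed human reading speed
hours_per_day = 8             # assumed hours of reading per day

words_per_day = words_per_second * 3600 * hours_per_day   # 86,400 words per day
years = words_total / words_per_day / 365                 # days -> years
print(f"about {years:,.0f} years")                        # about 15,855 years
```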
The use of ChatGPT in schools is not a grave concern; it is no different from downloading material from the internet. Universities have asked professors to remind students that the “honor pledge” (a commitment not to use or receive illegitimate help on a thesis, project, or essay) matters, and not to claim credit for work they did not create. Of course, Steven is not naive; he knows some Harvard students probably cheat, but he believes it is not widespread. At least for now, ChatGPT is easy to unmask, because it mashes up quotations and cites references that have never existed. Fear of a new technology is always driven by worst-case scenarios, without considering the countermeasures that arise in the real world. For large language models (LLMs), those countermeasures include growing skepticism toward automatically generated content (journalists have already dropped the gimmick of using GPT to write articles about GPT, because readers began to complain), the development of “ethical” and “professional” guardrails, such as the Harvard honor pledge (a commitment not to pass off intellectual work, even ChatGPT’s, as one’s own), and perhaps new technologies that can detect, or embed hidden watermarks in, LLM-generated text.
There are other sources of resistance as well. One is our very deep intuition about causal connections to real human beings. A collector may pay $100k for John F. Kennedy’s golf clubs even though they are no different from other clubs of the same era. The demand for authenticity in intellectual products such as stories and commentary is growing: knowing that a real human being stands behind a story, someone we can connect with, raises its value and its acceptance by the public. Forehead-slapping errors, such as recommending crushed glass as a dietary supplement or claiming that seven women could produce a baby in one month, also provoke public resistance. As the systems improve through user feedback (often supplied by low-paid clickworkers in poorer countries), the silly errors will shrink, yet the space of subtler errors remains unlimited. The most serious problem is that ChatGPT’s content cannot be fact-checked: there is almost no paper trail. For content produced by an ordinary writer, you can consult an expert and trace the references. In an LLM, however, “facts” are smeared across billions of tiny adjustments to quantitative variables, making it nearly impossible to trace or verify a source.
Even so, much of the boilerplate that LLMs produce looks as though a real person wrote it. That can be a good thing: instead of paying a lawyer’s steep hourly fees to draft an inheritance or divorce agreement, we could let an LLM handle it. ChatGPT could also serve as a more semantic, more flexible search engine. Unlike current search tools, which mostly match the literal input string, a semantic search engine could help explore ideas: it would work with something closer to a conceptual model of the world, with representations of people, places, objects, and events, of goals and causal relationships, something nearer to how the human mind organizes knowledge. Still, it is just a tool, like a search engine, from which you want useful information to emerge. An LLM can be very useful, as long as it stops making things up.
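To make the string-versus-semantic distinction concrete, here is a toy sketch. The document titles, the query, and the three-number “embeddings” are all invented for illustration; a real semantic engine would use high-dimensional vectors learned by a language model:

```python
import math

# Toy contrast between string search and semantic search.
# The "embeddings" are hand-made stand-ins for learned vectors; the
# numbers are illustrative only.
docs = {
    "How to draft a simple will":       [0.9, 0.1, 0.0],
    "Guide to dividing marital assets": [0.1, 0.9, 0.0],
    "Goldfish care for beginners":      [0.0, 0.0, 1.0],
}
query = "preparing an inheritance agreement"
query_vec = [0.85, 0.2, 0.0]  # hypothetical embedding of the query

# String search: no title contains the literal query, so nothing matches.
string_hits = [title for title in docs if query.lower() in title.lower()]
print("string search:", string_hits)  # -> []

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Semantic search: rank documents by similarity of meaning, not characters.
ranked = sorted(docs, key=lambda t: cosine(docs[t], query_vec), reverse=True)
print("semantic search:", ranked[0])  # -> "How to draft a simple will"
```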
Of course, ChatGPT and LLMs will profoundly affect how humans learn, acquire knowledge, and work toward expertise. Steven Pinker doubts, however, that the change will sweep in like a storm. More likely it will resemble earlier uses of computers to augment human intelligence: the advances in computing and data storage of the 1960s, search technology in the 1990s, and others, each of which pushed human limits higher. These tools make us aware of our limited memory and computational abilities, and make us recognize that while our capacity to sift and digest large amounts of information is good, artificial brains can do it far better. Because LLMs operate so differently from our brains, they will help us understand the nature of intelligence more deeply. They will make us appreciate human cognitive abilities by comparison with artificial systems that try to simulate the brain: some things the machines do better, at others they cannot keep up. Framing the question as “can AGI replace humans as the dominant species?” is itself misconceived. There is no single dimension of intelligence along which one mind could surpass all conceivable minds on the planet. We use IQ to measure differences between individuals, but it cannot be extrapolated into something that makes correct or wise decisions about everything (an everything-deducer), because knowledge of empirical reality, understanding grounded in experience, is limited by what can be observed. There is no perfect algorithm, omniscient and omnipotent, that synthesizes every kind of intelligence, every goal, and every world.
By Quan Nguyen Ha