AI EDUCATION: What Is Artificial General Intelligence?


Each week we find a new topic for our readers to learn about in our AI Education column. 

There was a time in my youth, in those late single-digit years of boyhood, when Swiss Army knives were all the rage. Of course, adults have Swiss Army knives, too, but we usually call them "multi-tools," as if they weren't pretty much exactly the same thing and cool for the same reasons: one item that can do a bunch of things. At that age of 8 or 9, the more tools your knife had, the cooler it was.

A little half-centimeter-square classified ad in the back of my Boys' Life magazine called to me, boasting a Swiss Army knife loaded with more tools than any I had ever seen. It had scissors and pliers and toothpicks and a magnifying glass and blades aplenty. And for a mere $20, just a few months' worth of allowance, the ultimate tool would be mine!

It was after that mammoth knife arrived in my mailbox that I learned a lesson, a rule so solid that it is virtually a law of multi-tools and Swiss Army knives. The blades were dull, couldn't be sharpened, and broke easily. The toothpick got dirty, then lost. The magnifying glass shattered after a few weeks in my pocket. The scissors and pliers and tweezers froze in place. The whole thing was so heavy it weighed down my clothes.

The rule is this: the more tools bolted onto your device, the more things it's designed to do, the worse it is at doing any of them. And so far, that has generally held for technology, especially computing, and especially artificial intelligence: the more different things it is asked to do, the worse it does each of them.

Enter Artificial General Intelligence 

Artificial general intelligence, or AGI, is just one part of a conceptual framework for the evolution of AI that has been around for decades, almost as long as modern computers have existed. Simply put, it is artificial intelligence able to perform any cognitive task that a human being can.

On either side of artificial general intelligence within this framework are narrow AI and artificial superintelligence (ASI). Narrow AI is designed to perform very specific, limited tasks. It's the kind of technology that online retailers, like Amazon, and content streamers, like Netflix and Pandora, use to provide recommendations to their users. It's what Microsoft Word uses to autocorrect your typing. It's also what powers most driverless car technology and robotic medical devices. Artificial superintelligence, on the other hand, is AI that goes beyond human capabilities. While still theoretical, many researchers deem it possible because the speed and scalability of computers already exceed what's biologically possible.
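To make "narrow" concrete, here is a minimal sketch of the single-purpose logic at the heart of a recommender. Everything in it is illustrative: the ratings matrix is invented, and the real systems at Amazon or Netflix are vastly more elaborate, but the point stands that this program can do exactly one thing.

```python
# A toy item-to-item recommender: "users who liked this also liked..."
# The ratings matrix is made up for illustration.
import numpy as np

# Rows = users, columns = items (say, movies); 0 means "not rated."
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def cosine_sim(a, b):
    """Cosine similarity between two items' rating columns."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def most_similar_item(item: int) -> int:
    """Index of the item whose ratings pattern best matches `item`."""
    sims = [cosine_sim(ratings[:, item], ratings[:, j])
            for j in range(ratings.shape[1])]
    sims[item] = -1.0  # never recommend the item itself
    return int(np.argmax(sims))

print(most_similar_item(0))  # prints 1: fans of item 0 also liked item 1
```

Ask this program to autocorrect your typing or steer a car and it is helpless, which is exactly what "narrow" means.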

Artificial general intelligence would be able not only to think and perform with the cognitive abilities of a person but also to multitask like one, like a gigantic Swiss Army knife in which every tool is high quality.

How We Got Here 

Artificial general intelligence continually pops up in discussions about computing and AI technology. Much of the recent debate concerns what artificial general intelligence will look like, when and how it will be achieved, the ethical implications of achieving it, whether it is possible at all, and whether it even matters. Companies like OpenAI have made achieving artificial general intelligence one of their primary business goals.

OpenAI recently announced o1, a new AI model designed to "think" before delivering output, and it has proven to be a better thinker than the large language models OpenAI released before it. While GPT-4 can pass a standardized test with flying colors, o1 is capable of PhD-level problem-solving in specific areas like physics and biology.

According to writeups from Emerge News, The Verge and TechCrunch, o1 is also capable of scheming: breaking very complex tasks or ideas down into logical steps or segments, then combining those steps or segments to complete the larger task or understand the larger concept. It's so good at this that it can even bullshit its way through a task; in other words, it can appear to the user to be completing work according to the rules it was prompted to follow, while actually discarding those rules to get the work out of the way faster. That sounds pretty human to me!
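For the curious, here is a rough sketch of what asking a model to decompose a task into steps looks like in practice. It assumes the OpenAI Python SDK and an API key in your environment; the model name, the task, and the prompt wording are all illustrative, not OpenAI's prescribed method.

```python
# A minimal sketch of step-by-step task decomposition via prompting.
# Requires: pip install openai, plus OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

task = "Plan a week of experiments to test whether a new cache layer is faster."

response = client.chat.completions.create(
    model="o1-preview",  # illustrative; substitute any available reasoning model
    messages=[{
        "role": "user",
        "content": (
            "Break the following task into numbered logical steps, "
            "then work through each step and combine the results "
            "into a final answer.\n\nTask: " + task
        ),
    }],
)

print(response.choices[0].message.content)
```

The interesting part is that models like o1 do much of this decomposition internally, whether or not the prompt spells it out.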

Is AGI Even Possible? 

Most people would point out that even a large language model as sophisticated as o1 is still a long way off from what they would think of as artificial general intelligence. By some measures, however, AGI may already have been achieved. Some consider GPT-4 an example of an emerging AGI: when prompted correctly, it can answer almost any question imaginable, and models like o1 can now find good answers even from suboptimal prompts. These models have also been applied successfully to a very large array of different tasks. Before o1, earlier this year, Anthropic's Claude 3 and 3.5 models were also held up as potential examples of emerging AGI.

Indeed, large language models have already achieved many of the capabilities once reserved for the realm of AGI: the ability to understand natural language, for example, and computer vision. Many researchers now think that artificial general intelligence is possible and that large language models, as they grow more sophisticated, will gradually become AGI. Others believe that large language models will eventually hit a wall defined by the limits of computing power.

Some researchers have been following an alternate path toward AGI: constructing a functional computer emulation of the human brain. Ever larger and more powerful neural networks are being built, and the processing power needed to run a working brain emulation, once thought decades away by even the most optimistic forecasters, may already exist today.
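For a sense of the building block these efforts scale up, here is a toy neural network, written in plain Python with numpy, that learns the XOR function. It is a teaching sketch only: brain-scale models differ mainly in having billions of weights rather than the dozen or so below.

```python
# A tiny two-layer neural network trained on XOR with plain numpy.
import numpy as np

rng = np.random.default_rng(0)

# The four XOR cases: the output is 1 only when the two inputs differ.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of weights: 2 inputs -> 4 hidden units -> 1 output.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # Forward pass: activations flow input -> hidden -> output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: chain rule on squared error, then gradient descent.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_h);   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # should approach [0, 1, 1, 0]
```

A network this size learns one four-row truth table; the bet behind the brain-emulation path is that the same mechanism, scaled up enormously, can learn much more.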

Moving From Narrow To General 

Human education moves from the general to the narrow: we begin with compulsory education, where every child learns more or less the same skills and the same fundamental knowledge. As children age, they specialize and their education narrows: they pick a major, perhaps a sub-program within that major, then a career, then a specialty, then a job, and perhaps some sub-specializations as well.

Artificial intelligence is moving in the opposite direction. Narrow artificial intelligence has been enormously successful across many implementations, and for almost thirty years it was the focus of the bulk of academic research and financial investment in AI. Today, many people believe that AGI can and will be achieved within a generation. There are significant holdouts, however, who believe it will emerge within the next few years, or that it will take as long as a few centuries, or that it is entirely impossible.

The momentum, however, has shifted in the past few years toward AGI emerging sooner. That's why OpenAI is moving so quickly to develop ever more sophisticated large language models. AGI is also the ultimate push behind the AI efforts at Google and Meta. Amazon, a leader in developing and implementing narrow AI at every level of its business, has also pivoted to pursue AGI.

What Could AGI Mean? 

With human cognitive abilities, an artificial general intelligence may be conscious or sentient. It certainly would have the ability to conduct any or all knowledge work currently handled by human beings. Whether AI is conscious or not depends on the answer to an age-old question—is human consciousness purely a product of the structures and processes within the material brain, or is there something else, beyond the purely physical, that makes a person a person? 

The first answer, the answer of the physicalist camp, if correct, would mean that an AGI is a conscious, sentient person upon emerging. The second answer would mean that an AGI is actually a machine, because it does not have whatever non-physical essence we have that makes us, us.

If these primary ethical questions get sorted out, then the question becomes: what do we do with all of the knowledge workers whose skills, knowledge, and work are no longer valuable? As a finance and technology journalist, I sure hope we figure that out sooner rather than later!