AI EDUCATION: What Is Functional AI?


Each week we find a new topic for our readers to learn about in our AI Education column. 

This week in AI Education we’re going to take another view of fundamental artificial intelligence knowledge. In the past, we’ve sliced up the AI universe by sophistication and capability, with three levels of artificial intelligence: Weak (or narrow) AI, Strong AI or Artificial General Intelligence (AGI), and artificial super intelligence or Super AI. As we’ve discussed, all AI currently in use is weak AI, that is, all artificial intelligence is sub-human intelligence, and both AGI and Super AI are as yet theoretical and unrealized. 

There’s a different way to organize and think of artificial intelligence technology, and that is by functionality. In researching this article, we found that most publications around the web—likely citing the same sources—organize AI along four different functional categories, with two of them already existing in the form of weak AI, and two of them being purely theoretical and residing in the realm of strong and super AI. 

I think those of us who use and write about AI, without knowing this information, intuitively use some of these functional categories anyway. There’s a clear difference in the technology that makes recommendations on our online shopping apps and the general knowledge underpinnings of something like a ChatGPT, right? That’s kind of how we got here this week.  

Anyway, the four functional categories of artificial intelligence are: reactive machine AI, limited memory AI, theory of mind AI, and self-aware AI. 

What Is Reactive Machine AI? 

Reactive machines are exactly what they sound like—they operate within specific tasks and within specific interactions and carry no memory or knowledge from one task or interaction to another. This is the most rudimentary application of weak or narrow AI, one that has already proliferated throughout the economy. 

Reactive machines are designed to be specialists—not only do they perform specific tasks within narrowly defined parameters, but, due to their rules-based nature, they're also designed to be predictable. They are, however, capable of responding in real time to various stimuli, be they input or commands from a human user or changes sensed in an environment. 

Real-world examples of reactive machines include most game-playing AI, starting with IBM’s Deep Blue chess-playing computer from the 1990s, all the way to the AI-powered computer opponents within today’s video games. Modern traffic management systems are also usually powered by reactive machine AI. Reactive machines are behind e-mail and messaging spam filters. Recommendation engines, like the kinds used by Netflix and Amazon, also rely on reactive machines. 
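For readers who like to see the idea in code, the spam filter example above can be reduced to a minimal, hypothetical sketch. The keywords and function name below are invented for illustration; the point is that the program is stateless and rules-based: each message is judged on its own, and nothing is remembered from one call to the next.

```python
# Toy sketch of a reactive machine: a rules-based spam filter.
# Stateless—each call depends only on the current input, and no
# information carries over between messages.

SPAM_KEYWORDS = {"winner", "free money", "act now"}  # invented for illustration

def is_spam(message: str) -> bool:
    """Classify a single message with fixed rules; no memory of past messages."""
    text = message.lower()
    return any(keyword in text for keyword in SPAM_KEYWORDS)

print(is_spam("You are a WINNER, claim your free money"))  # True
print(is_spam("Lunch at noon?"))                           # False
```

Because the rules are fixed, the behavior is fully predictable—the same input always yields the same output, which is exactly the trade-off the reactive category describes.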

What Is Limited Memory AI? 

Newer and still proliferating, limited memory AI represents a step beyond reactive machine AI in that it has some level of memory, allowing it to retain information between interactions and tasks, learn from past experiences, and build knowledge or skill. Retaining information allows limited memory machines to make better predictions and decisions over time—in effect, informed decisions. 

Limited memory machines are capable of monitoring events over time, measuring change and making inferences from those changes. They also improve their performance over time. However, limited memory AI lacks the ability to retain specific information over the long term—it does not keep a complete record of its past experiences. 
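The contrast with reactive machines can be sketched in a few lines. The class and threshold below are hypothetical, but they illustrate the two properties just described: decisions draw on remembered context, and the memory is bounded, so there is no complete record of the past.

```python
from collections import deque

# Toy sketch of limited memory AI: decisions use a rolling window of
# recent observations, so behavior adapts over time—but the window is
# bounded, so older experiences are eventually forgotten.

class AdaptiveThreshold:
    def __init__(self, window: int = 5):
        self.recent = deque(maxlen=window)  # bounded memory of past readings

    def observe(self, value: float) -> None:
        self.recent.append(value)

    def is_anomaly(self, value: float) -> bool:
        # The decision depends on remembered context, not fixed rules alone.
        if not self.recent:
            return False
        baseline = sum(self.recent) / len(self.recent)
        return value > 2 * baseline

detector = AdaptiveThreshold()
for reading in [10, 11, 9, 10, 12]:
    detector.observe(reading)
print(detector.is_anomaly(30))  # True: far above the recent baseline
print(detector.is_anomaly(11))  # False: consistent with recent readings
```

Unlike the reactive sketch, the same input can produce different outputs depending on what the system has recently observed—which is the essence of the limited memory category.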

Examples of limited memory AI include generative AI tools like ChatGPT and Bard. Virtual assistants like Siri and Alexa are also usually powered by limited memory AI in combination with natural language processing. Autonomous vehicles rely on limited memory AI to make decisions based on what they see in the world around them using computer vision. Today, limited memory AI is also being used to power smart homes and security systems, due to its ability to learn user preferences and to monitor an environment over time. 

What Is Theory of Mind AI? 

Now we’re moving into the realm of science fiction, since theory of mind (ToM) AI is a type of artificial general intelligence that does not yet exist. We’re already making huge leaps in neural networks that mimic the brain—mimicking all the things that compose our concept of “mind” is a completely different ballgame. Theory of mind AI would be able to understand the mentality of others, be they human beings, other organisms, or other artificial intelligences. This type of AI would also be able to mimic and respond to others’ mental states. 

Theory of mind AI would be able to interpret, through interactions, users' emotions and beliefs, as well as their intentions and perspectives. It would also, to some extent, be able to anticipate those states in the agents it interacts with and predict their choices and actions. Theory of mind AI would appear to express empathy, whether it could be said to truly feel empathy or not. 

I view theory of mind AI on a continuum, as AI developers are continually working to instill at least some aspects of the mind into AI. Chatbots, for example, would be a lot less frustrating to work with if they were to develop theory of mind. Self-driving cars would be safer if they could better anticipate the behaviors of the other humans and machines on the road. Theory of mind is also probably essential to AI penetrating deeper into sensitive fields like medicine and finance. We’re starting to see what could be the early shoots, the spring crocuses, if you will, of what could be considered theory of mind AI in some specific applications.

What Is Self-Aware AI? 

We’re really into science fiction now, as truly self-aware AI would have to fall into the definition of artificial super intelligence, or super AI, AI which exceeds human capabilities. Self-aware AI would not only understand the behaviors, emotions and intentions of others around it, but it would also have an awareness and understanding of itself, with its own set of emotions and thoughts. It would care about what it did, or did not do. It might have guilt, or regret, or annoyance, or displeasure, or anger, or, well, you get the point. 

If self-aware AI were ever to come into existence, it would be a conscious being with an identity, not a gofer or chat interface for business or pleasure. As it stands, popular science fiction has offered many examples of self-aware AI in literature, movies and television, along with the philosophical debates and potential dangers such a creation would touch off. We won't go down that alley here.