AI EDUCATION: What Is An AI Agent?


Each week we find a new topic for our readers to learn about in our AI Education column. 

Today on AI Education we’re going to discuss the artificial intelligence agent. Like most of the AI terms we define and explore here for our readers, an AI agent, stripped to its core, is just another piece of software, albeit a very sophisticated one. AI agents may be connected to physical hardware, either as a place where they reside or as a robotic device that they operate.

AI agents are designed to interact with their environment, collect and process data, and then use that data to perform tasks in pursuit of goals set by their end users. Those end users are typically thought of as human, but they need not be: they could be other programs or other AI agents. Because they can perform their functions autonomously, AI agents are deployed to raise productivity, cut costs and create better user and customer experiences.

The setting and the task define the AI agent. An AI agent operating in a hospital might be optimizing an operating room schedule, reading diagnostic tests to identify potential markers for disease or injury, or designing treatment regimens for oncology patients. An AI agent in a call center will be distributing calls efficiently and ensuring that as many customer calls reach resolution as possible, and so on. Agents also learn as they go: not only are they “trained,” as with most AI technologies, but they adapt to their tasks and their users over the course of their deployment.

How We Got Here 

Anthropic recently launched Claude 3.5 Sonnet, the latest version of its major large language model. As part of the new release, Anthropic is allowing developers to direct the artificial intelligence to operate computers, a capability known as “computer use.”

In other words, Claude 3.5 Sonnet can use a computer much like a human does. The new feature, technically in a public beta test, lets Claude read computer screens, type, move a cursor and click as if it were a human computer user.

Unlike AI agents built for very specific tasks, Claude 3.5 Sonnet and its successors can be directed toward more generalized functions. Over time, the goal is for them to accomplish anything a human user could accomplish in front of a computer.

How Do They Work?

Like most applications of emerging artificial intelligence, AI agents serve to simplify very complicated tasks by creating a rational, repeatable workflow. After an end-user gives the agent a goal to achieve, the agent identifies steps—a series of tasks—that it needs to complete to reach that goal.  
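
For readers who want to peek under the hood, here is a minimal Python sketch of that first planning step. The `ask_model` helper is hypothetical, a stand-in for a call to whichever model underpins the agent; it returns a canned plan here so the example runs.

```python
# A minimal sketch of goal-to-task planning. `ask_model` is a hypothetical
# stand-in for a real LLM call; it returns a canned plan so the sketch runs.

def ask_model(prompt: str) -> str:
    return "check surgeon availability\nblock operating rooms\npublish the schedule"

def plan_tasks(goal: str) -> list[str]:
    """Ask the model to break a goal into an ordered list of tasks."""
    response = ask_model(f"List, in order, the tasks needed to: {goal}")
    # Each non-empty line of the model's answer becomes one task.
    return [line.strip() for line in response.splitlines() if line.strip()]

print(plan_tasks("optimize this week's operating room schedule"))
```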

The agent then starts to seek the data it needs to complete its task. It may do so by reading a document, reviewing conversation logs, scanning the internet, contacting a user or customer directly to ask for the needed information, or interacting with another AI agent.
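
Sketched in the same vein, the agent might simply try one source after another until something answers the question. The sources below are toy stand-ins for a document store, conversation logs and a web search, not any particular product’s API.

```python
# Sketch of the data-gathering step: try each source in turn until one
# yields an answer. All three sources are toy stand-ins for illustration.

from collections.abc import Callable

def search_documents(query: str) -> str | None:
    return None  # nothing on file in this toy example

def search_logs(query: str) -> str | None:
    return None  # no relevant conversations logged

def search_web(query: str) -> str | None:
    return f"web result for {query!r}"

def gather(query: str, sources: list[Callable[[str], str | None]]) -> str | None:
    """Return the first answer any source produces, or None."""
    for source in sources:
        data = source(query)
        if data is not None:
            return data
    return None  # a real agent might now ask the user or another agent

print(gather("surgeon availability", [search_documents, search_logs, search_web]))
```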

With the necessary information in hand, the AI agent begins to work through the tasks it has identified in logical order to reach the conclusion designated by its user, consistently checking its own work and seeking feedback to ensure it is progressing towards the desired outcome. 
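
Put together, the core loop might look something like the sketch below: the agent works through its tasks in order, checks each result, retries on failure, and finally flags the task for its user. The `execute` and `looks_correct` helpers are hypothetical stand-ins for real tool or model calls.

```python
# Sketch of the execution loop: do each task, self-check the result,
# retry a bounded number of times, then escalate to the user.

def execute(task: str) -> str:
    return f"result of {task!r}"  # stand-in for real work (a tool or model call)

def looks_correct(task: str, result: str) -> bool:
    return bool(result)  # stand-in for a self-check against the goal

def run_agent(tasks: list[str], max_retries: int = 2) -> list[str]:
    results = []
    for task in tasks:
        for _ in range(max_retries + 1):
            result = execute(task)
            if looks_correct(task, result):  # verify before moving on
                results.append(result)
                break
        else:
            # Still failing after all retries: pause and seek user feedback.
            results.append(f"needs human input: {task}")
    return results

print(run_agent(["check surgeon availability", "publish the schedule"]))
```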

AI agents can’t work without human interaction. According to IBM, there are three levels of human interaction that help define an AI agent’s behavior: interaction with the team of developers who design and train the agent, interaction with the team that deploys the agent, and interaction with the user who actually provides the agent with its goals.

Some Types of AI Agents 

Goal- and Rule-Based Agents

Goal- or rule-based agents are designed to choose between several possible paths to an outcome, seeking the most efficient route to that outcome.
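
One way to picture that choice is as a search over possible routes. The sketch below uses a tiny hand-written map of states and step costs, which is our own toy example rather than anything from a production system.

```python
# Sketch of a goal-based agent picking the cheapest route to an outcome,
# using uniform-cost search over a toy state graph.

import heapq

def cheapest_path(graph, start, goal):
    """Return (cost, path) for the lowest-cost route from start to goal."""
    frontier = [(0, start, [start])]
    seen = set()
    while frontier:
        cost, state, path = heapq.heappop(frontier)
        if state == goal:
            return cost, path
        if state in seen:
            continue
        seen.add(state)
        for nxt, step_cost in graph.get(state, []):
            heapq.heappush(frontier, (cost + step_cost, nxt, path + [nxt]))
    return None

# Toy map: each state lists (next state, cost of that step).
graph = {
    "start": [("a", 2), ("b", 5)],
    "a": [("goal", 6)],
    "b": [("goal", 1)],
}
print(cheapest_path(graph, "start", "goal"))  # (6, ['start', 'b', 'goal'])
```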

Simple and Model-based Reflex Agents 

Simple reflex agents operate on predefined rules and data and are unable to respond to events beyond those rules or consider data outside of their strict parameters. A model-based reflex agent, rather than being based on specific, predefined rules, is able to evaluate outcomes, consequences and probabilities before settling on a course of action. 
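
A toy thermostat makes the contrast visible: the simple reflex agent maps the current reading straight to an action, while the model-based agent keeps a little internal state (the temperature trend, in this hypothetical example) and weighs likely consequences before acting.

```python
# Sketch contrasting the two reflex styles with a toy thermostat.

def simple_reflex(temp: float) -> str:
    # Pure condition-action rules: percept in, action out.
    if temp > 22.0:
        return "cool"
    if temp < 18.0:
        return "heat"
    return "idle"

class ModelBasedReflex:
    def __init__(self):
        self.last_temp: float | None = None  # internal model of the world

    def act(self, temp: float) -> str:
        # Consider the trend, not just the current reading.
        rising = self.last_temp is not None and temp > self.last_temp
        self.last_temp = temp
        if temp > 21.0 and rising:
            return "cool"  # act early because the model predicts overshoot
        return simple_reflex(temp)

agent = ModelBasedReflex()
print(agent.act(21.5))  # "idle": no trend yet
print(agent.act(21.8))  # "cool": still below 22, but rising
```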

Utility-based Agents 

A utility-based agent, according to AWS, “uses a complex reasoning algorithm to help users maximize the outcome they desire. The agent compares different scenarios and their respective utility values or benefits. Then, it chooses one that provides users with the most rewards.” 
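
In code, that comparison can be as simple as scoring each scenario with a utility function and taking the maximum. The scenarios and weights below are invented for illustration and are not drawn from AWS’s materials.

```python
# Sketch of utility-based choice: score each candidate scenario, pick the best.

def utility(scenario: dict) -> float:
    # Trade reward off against cost and risk; the weights are illustrative.
    return scenario["reward"] - 0.5 * scenario["cost"] - 2.0 * scenario["risk"]

scenarios = [
    {"name": "fast route",  "reward": 10.0, "cost": 6.0, "risk": 1.5},
    {"name": "cheap route", "reward": 8.0,  "cost": 2.0, "risk": 0.5},
]
best = max(scenarios, key=utility)
print(best["name"])  # "cheap route": utility 6.0 beats 4.0
```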

Learning Agents

Learning agents, per IBM, are distinct in their ability to learn as they go. New experiences are added to their initial knowledge base autonomously, and this learning enhances the agent’s ability to operate in unfamiliar environments. Learning agents may be utility- or goal-based in their reasoning.
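
The skeleton of such an agent can be surprisingly small. In this sketch (our own illustration, not IBM’s design), every new experience is folded back into the knowledge base automatically, so later decisions draw on everything seen so far.

```python
# Sketch of a learning agent: it starts from an initial knowledge base and
# autonomously adds each new experience, improving in unfamiliar situations.

class LearningAgent:
    def __init__(self, initial_knowledge: dict[str, str]):
        self.knowledge = dict(initial_knowledge)  # the initial knowledge base

    def act(self, situation: str) -> str:
        # Use what it already knows; otherwise fall back to exploring.
        return self.knowledge.get(situation, "explore")

    def learn(self, situation: str, outcome: str) -> None:
        # A new experience is added to the knowledge base automatically.
        self.knowledge[situation] = outcome

agent = LearningAgent({"door locked": "use key"})
print(agent.act("door stuck"))   # "explore": unfamiliar situation
agent.learn("door stuck", "push harder")
print(agent.act("door stuck"))   # "push harder": learned from experience
```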

What Are the (Very Long-Term) Implications?

As AI agents become more sophisticated and able to perform more generalized tasks, as Anthropic’s Claude is expected to do as it is adopted more broadly, the way humans think, work and behave will have to change in response. This may lead to a whole new mode of thinking.

Our financial services readers will probably be familiar with Daniel Kahneman and Amos Tversky’s two-system model of human thought: System 1 is the quick, tactical, responsive thinking useful in an immediate crisis, while System 2 is the slower, more patient thinking useful for strategy.

A recent paper from an international team of researchers published in Nature Human Behaviour proposes a third system: System 0, thinking that arises from human-AI interaction and occurs partially or completely outside of the human brain. This new, external thinking may signify that permanent, significant changes to human cognition are to come. Our brains might run wild considering what, exactly, that could mean.