AI EDUCATION: Why Are We Taking AI to the Edge?


Each week we find a new topic for our readers to learn about in our AI Education column. 

As we try to help people learn more about artificial intelligence, we constantly stumble on new, poorly defined terms that technology writers seem to assume the rest of us will automatically understand. 

If you’re reading other writers’ work on artificial intelligence across the internet, you’re almost sure to come across the term “edge AI” or “edge artificial intelligence.” 

If you suspect this neologism has something to do with so-called edge computing, you’re right—and if you have little-to-no idea of what edge computing is, don’t worry, that’s what we’re here to help you with. 

What Is Edge Computing? 

Edge computing moves data storage and computation closer to the source of the data. So if you’re a bank, it means that rather than relying on a distant data center or a mainframe at the core of your network to store and process information, data is housed on-site and more of the computing is done in-house. 

Edge computing emerged in response to a geographic problem: because of variables like climate and the uneven distribution of power generation, many technology providers have found it useful to build data centers far from where the users creating and calling on the data reside. So while a professional might be using a computer in New York or Chicago, the computations themselves might be done near Midland, Texas, for its low energy costs, and the data itself might reside somewhere like Wyoming or Montana, thanks to lower costs of doing business and cooler average temperatures. 

This introduced a problem: latency. Latency describes how long it takes to get a response when calling on data or computational power. The farther away a user is from their data and processing power, the longer it takes to get responses from the technology. As computers themselves initiate more demands for data and processing, excessive latency threatens to gum up the works. 
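
To make that concrete, here is a rough, illustrative sketch in Python. It rests on simple assumptions: it counts only the travel time of a signal through optical fiber (roughly 200,000 kilometers per second, about two-thirds the speed of light) and ignores routing, congestion and server processing, so real-world delays run higher; the distances are placeholders, not measurements.

    # Rough illustration: how distance alone adds round-trip delay.
    # Assumes a signal speed in optical fiber of about 200,000 km/s and
    # ignores routing hops, congestion and server processing time.

    FIBER_SPEED_KM_PER_S = 200_000

    def round_trip_ms(distance_km: float) -> float:
        """Minimum round-trip time, in milliseconds, for a one-way distance."""
        return (2 * distance_km / FIBER_SPEED_KM_PER_S) * 1000

    # Approximate one-way distances from a user's office (placeholders).
    scenarios = {
        "on-site server (1 km away)": 1,
        "regional data center (400 km away)": 400,
        "distant data center (2,500 km away)": 2_500,
    }

    for label, km in scenarios.items():
        print(f"{label}: at least {round_trip_ms(km):.2f} ms per round trip")

Even in this best case, the distant data center adds about 25 milliseconds per round trip before any real work gets done, and those milliseconds add up when machines are making thousands of requests.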

Where Did Edge Computing Come From? 

Edge computing is a relatively new term, arising only as more computing work is outsourced across existing networks and/or the cloud. With the rise of the cloud, resource-intensive computing work can be done at data centers distributed geographically around the world. Much like human labor, a computer’s labor can be outsourced to the lowest-cost, highest-efficiency locations. 

In the past, a major institution with heavy computing needs, like a hospital or center of education, would build a mainframe—in some ways the rough equivalent of what might colloquially be called a supercomputer—in its basement. Users at terminals in different areas of the institution would work on technology that looked like desktop personal computers, but most of the heavy lifting was being done in the basement. The interface was located throughout the institution, but the power was downstairs. 

As powerful computers shrank, however, this was no longer necessary: more computing could be done “locally,” where users were interfacing, and the need for a huge mainframe computer in the basement declined. Then came a bevy of new technologies, like artificial intelligence and machine learning, that tended to rely on more computing power than was available to the average person, and processing power started to be outsourced again. While the move back toward outsourcing solves the pressing data storage and processing power problems many institutions face in implementing AI, it also reintroduces the latency issue. 

Living On The Edge 

Edge AI, then, is exactly what it sounds like: a combination of artificial intelligence and edge computing. Rather than deferring to a centralized digital brain to solve problems, edge AI uses a technological peripheral nervous system to generate results. According to Red Hat, edge artificial intelligence “is the use of AI in combination with edge computing to allow data to be collected at or near a physical location.” 

Edge AI is key to healthcare and transportation applications of artificial intelligence technology, as it allows information to be quickly processed and decisions to be made within tiny fractions of a second. Edge AI is what allows self-driving vehicles to be “aware” enough to respond to traffic in front of them stopping short, or a surgical or medical device to make tiny changes in its interventions in response to real-time changes in a human body. 

Edge AI also means that, rather than relying on back-end computers to solve problems, the devices in the hands of users—like professionals and consumers—handle more of the heavy lifting. In edge computing, these devices serve as both collectors of data and processors of that data, creating more instantaneous feedback for users. As more localized sources of data emerge, like the so-called Internet of Things, there will be greater need for more localized computing. 
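
As a loose illustration of that split, the Python sketch below contrasts the two approaches. The sensor readings, the threshold “model” and the simulated 50-millisecond network delay are all hypothetical placeholders rather than any particular vendor’s API; the point is simply that the edge version makes its decision without the data ever leaving the device.

    import time

    # Hypothetical sensor readings collected on a device (say, temperatures).
    readings = [21.4, 21.9, 35.2, 22.1]

    # Hypothetical "model": a simple threshold standing in for real inference.
    ALERT_THRESHOLD = 30.0

    def classify_on_device(value: float) -> str:
        """Edge approach: the device decides locally, no network round trip."""
        return "alert" if value > ALERT_THRESHOLD else "normal"

    def classify_via_cloud(value: float) -> str:
        """Cloud approach: simulate shipping the reading to a remote server."""
        time.sleep(0.05)  # stand-in for a ~50 ms network round trip
        return "alert" if value > ALERT_THRESHOLD else "normal"

    for reading in readings:
        start = time.perf_counter()
        result = classify_on_device(reading)
        print(f"edge:  {reading} -> {result} "
              f"({(time.perf_counter() - start) * 1000:.3f} ms)")

    for reading in readings:
        start = time.perf_counter()
        result = classify_via_cloud(reading)
        print(f"cloud: {reading} -> {result} "
              f"({(time.perf_counter() - start) * 1000:.1f} ms)")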

Why Are We Taking AI to the Edge? 

According to Red Hat, edge AI has several advantages beyond its ability to deliver more immediate results, including: 

  • Lower energy needs. 
  • Lower bandwidth needs. 
  • Enhanced privacy and security. 
  • Enhanced scalability. 

IBM, in its own description of edge AI, adds that the technology allows artificial intelligence to be applied when an internet connection might not be available, making technology like autonomous vehicles, wearable devices and smart home appliances fully functional when users are offline.