Each week we find a new topic for our readers to learn about in our AI Education column.
Computers keep getting faster and faster with every passing day, but the universe has a speed limit that we can’t seem to break—approximately 300,000 kilometers per second, also known as the speed of light.
The question then becomes, as we accelerate our computers faster and faster, how close can we get to that universal speed limit?
Welcome to another AI Education, where this week we’ll talk about a technology that promises to help us approach the speed limit of physical computing power in our universe: optical computing.
Optical computing is related to a few topics we’ve covered since taking over this column in 2024. The most closely related would be one of the first pieces we wrote for AI Education, on quantum computing, because it is mostly within the realm of quantum computing that optical computing is being used. We’d also call your attention to a more recent piece on biological computing, in that we’re talking about a technology that takes the concept of a modern computer but moves it partially or completely off integrated circuits crafted from silicon wafers.
To clarify, this week we are not talking about video generation, computer vision, or the integration of visual data into AI decision-making. When implemented, optical computing will reside on the infrastructure side of artificial intelligence.
What Is Optical Computing?
Modern computers move energy through integrated circuits: tiny transistors whose gates work like on-off switches. The patterns in that energy are our data, or information, and the changes the computer makes to those patterns are computation. Those computers are incredibly fast. Like any energy, the electrical energy that moves through a modern computer could, in principle, travel at the speed of light if it were moving through a vacuum. But we’re moving it through a material: in the case of a central or graphics processing unit (CPU or GPU), silicon. Energy has a lower physical speed limit when moving through matter, and a significantly lower limit when moving through solid matter. It’s worth mentioning, also, that moving energy through a solid material generates a lot of heat.
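The on-off switching described above is what makes computation possible: wiring switches together yields logic gates, the building blocks of every processor. Here is a minimal, purely illustrative Python sketch of that idea (the function names and truth-table printout are our own, not anything from a real chip design):

```python
# Transistor gates act as on-off switches: 1 is "on", 0 is "off".
# Combining switches yields logic gates, the building blocks of computation.

def AND(a: int, b: int) -> int:
    # Output is on only when both input switches are on.
    return a & b

def NOT(a: int) -> int:
    # Invert the switch: on becomes off, off becomes on.
    return a ^ 1

def NAND(a: int, b: int) -> int:
    # NAND is "universal": every other logic gate can be built from it.
    return NOT(AND(a, b))

# Print the truth table for NAND.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", NAND(a, b))
```

Whether the switch is a silicon transistor or an optical sensor, the logic built on top of it is the same; the difference lies in how fast, and how hot, the switching medium is.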
But we do have access to a medium that can transmit information at the speed of light without needing to pass through any matter at all: light itself. We’re already using light to move data around; that’s what all the hubbub about laying fiber optic cable in recent decades has been about. A fiber optic link is pretty much as fast as it is straight: the less the light has to bend or bounce off the sides of its conductor, the faster it moves between endpoints. At this point, fiber optic cable crisscrosses the globe. Optical computing takes that idea of using light to move data around and extends it to data processing.
Optical computing, sometimes known as photonic computing, uses photons, the particles of light, instead of electrons; photons do not need a physical medium to move through. An optical computer, then, is a mechanism that computes not by moving electrical energy through silicon, but by moving light through air or a vacuum to tiny sensors, which are capable of conveying the same on-off information as the transistors on an integrated circuit, and more.
Here’s Where Quantum Computing Comes In
Recall that transistors rely on binary bits to process, record and report information: bits read as ones, the on state, or zeros, the off state. Quantum computers use quantum bits, or qubits. Like a binary bit, a qubit can store either a one or a zero, but it can also be a weighted combination of one and zero at the same time: a third “quantum” state that is not a single third state, but a spectrum of possible states. Qubits can store and convey a lot more information than binary bits because of this, but, like the fastest optical computers, they require vacuum or near-vacuum conditions for storage.
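The “weighted combination” of one and zero can be made concrete with a small simulation. This is a hedged sketch of the standard textbook model, not any particular quantum hardware: we represent a qubit by two amplitudes whose squared magnitudes sum to one, and measuring it collapses it to 0 or 1 with those probabilities (the weights 0.7 and 0.3 below are arbitrary illustration values):

```python
import math
import random

# A classical bit holds exactly one of two states.
classical_bit = 0  # or 1

# A qubit is a weighted combination (superposition) of 0 and 1.
# Here: amplitude alpha for the 0 state, beta for the 1 state,
# with alpha^2 + beta^2 = 1.
alpha = math.sqrt(0.7)
beta = math.sqrt(0.3)
assert abs(alpha**2 + beta**2 - 1.0) < 1e-9

def measure(a: float, b: float) -> int:
    """Collapse the superposition: 0 with probability a^2, else 1."""
    return 0 if random.random() < a**2 else 1

# Repeated measurements recover the underlying weights.
samples = [measure(alpha, beta) for _ in range(100_000)]
print(sum(samples) / len(samples))  # roughly 0.3
```

The point of the sketch: a single measurement still yields only a 0 or a 1, but the state being measured carries a continuous weighting, which is why a qubit can encode more than a binary bit.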
Optical computers are capable of binary computing; however, the optical sensors used in optical computing can also detect states between on and off, meaning optical computers are able to leverage qubits. Traditional computers have to move information from where it is stored or entered, through a computational device, and back to where it will be stored or reported, while optical computers are potentially able to compute without moving the information around.
Like biological computers, optical computing sounds a little science fiction-esque to us, but like biological computers, it already exists and is being deployed. Like biological computers, optical computing isn’t necessarily bound by Moore’s Law, which has seemed to define and limit the growth of traditional computing technology. Also like today’s biological computers, optical computers are typically paired with traditional silicon integrated circuits—with the silicon and optical components passing certain data and functions back and forth to maximize efficiency and minimize heat.
And Here’s Where AI Comes Back In
Data processing and storage create a lot of the limits on the expansion and proliferation of artificial intelligence. There are only so many data centers we can build, power and cool with the natural resources at our disposal. AI is already coming up against significant public concern over its energy and water use, even as more of the public embraces everyday use of AI and we find more ways to derive value from implementing it. Something has to give. There’s going to be more AI, surely, so we’re going to have to build better infrastructure. The pressure is on to create more powerful data centers while making them more energy and heat efficient.
It is believed that optical computing technologies have the potential to be more heat and energy efficient than traditional binary computers. In fact, they may turn out to be efficient enough to democratize access to quantum computing.
As neural networks become more powerful, they also become larger and more complex, and they’re going to require faster and more efficient technology to continue to store, move and process information. Subsequent generations of AI, including artificial general intelligence and artificial superintelligence, will surely need to rely on more advanced processing and energy infrastructure than is currently available.






