Next-Generation Computing Finally Sees Light
Nov. 17, 2022
Contributor: Thomas Ferreira de Lima, Researcher, Optical Networking & Sensing
In the US alone, there are millions of miles of fiber optic cables in buildings, factories, and cities, along our highways, and connected to our homes.
Light moving inside these cables travels at over 124 thousand miles per second. Whenever you talk on the phone with someone on the other side of the world, watch live cable television, or download a movie on Netflix, you are experiencing the real-time delivery of data as light traveling inside fiber optics.
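The figure above is easy to sanity-check. A minimal back-of-the-envelope sketch, assuming a typical silica-fiber refractive index of about 1.468 (a standard textbook value, not a number from this article):

```python
# Light slows down inside glass by the fiber's refractive index.
C_VACUUM_MI_S = 186_282  # speed of light in vacuum, miles per second
N_FIBER = 1.468          # typical refractive index of a silica fiber core

speed_in_fiber = C_VACUUM_MI_S / N_FIBER  # roughly 127,000 mi/s, i.e. "over 124 thousand"

# One-way travel time to the other side of the world
# (~half of Earth's circumference, in miles).
half_world_mi = 12_450
one_way_latency_ms = half_world_mi / speed_in_fiber * 1000

print(f"{speed_in_fiber:,.0f} mi/s, ~{one_way_latency_ms:.0f} ms one way")
# prints "126,895 mi/s, ~98 ms one way"
```

So even at light speed, a round trip halfway around the globe costs on the order of 200 ms in the fiber alone, which is why latency matters so much for the real-time applications discussed later.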
Today, most data that travels through these fiber optic cables is processed by the same traditional computation platform that processes most of the world's data: a physical microprocessor crunching massive quantities of zeros and ones.
This von Neumann architecture, which includes a processing unit, control unit, memory, and storage, has formed the basis of computing since the 1940s. Since then, computers have followed an exponential trajectory of falling cost, rising speed, and growing memory capacity.
However, the rate at which the world creates data today, and the speed at which we expect that data to be processed, are pushing traditional platforms to their limits.
In 1965, Gordon Moore, co-founder of Intel, predicted that the number of transistors on a chip would double every year for the next decade. In 1975, he updated his prediction to every two years. These predictions held true for decades and powered tremendous growth in semiconductors and consumer products, but depending on whom you ask, Moore's law is now dead or dying. Computer engineers can no longer rely on advances in silicon fabrication to propel future speed gains. Moreover, they need to deal with mounting resource scarcity: the recent supply shortage of chips has limited our ability to manufacture cars, consumer electronics, and everything in between.
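Moore's 1975 prediction compounds quickly. A small illustration of the doubling arithmetic, seeded with the 2,300-transistor count of Intel's first microprocessor, the 4004 (a well-known historical figure, used here only as a starting point):

```python
# Moore's 1975 prediction: transistor counts double every two years.
def transistors(start: int, years: int, doubling_period: int = 2) -> int:
    """Project a transistor count forward under periodic doubling."""
    return start * 2 ** (years // doubling_period)

# Ten doublings over 20 years multiply the count by 2**10 = 1024:
# 2,300 transistors (Intel 4004, 1971) grows past two million.
print(transistors(2_300, 20))  # → 2355200
```

That relentless compounding is exactly what silicon fabrication can no longer sustain, which is what motivates the alternative platforms discussed next.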
To address these issues, the IT industry has started developing a range of fit-for-purpose accelerators, most recognizably GPUs, and exploring alternative computing technologies. Quantum computing, for example, targets problems that cannot be tackled with processors based on classical physics. There have also been advances in using synthetic DNA as a computation platform. While none of these technologies are designed to replace traditional computing for all workloads, they can process application-specific workloads faster while consuming less energy.
Interest in photonic computing, or optical computing, is quickly gaining momentum as another promising fit-for-purpose alternative. For decades, photons, or light, have shown promise in carrying higher-bandwidth information than electrons traveling through traditional circuit boards, potentially offering a solution for workloads facing bottlenecks today.
Today, we can already use fiber optic cables themselves as sensors that measure changes in vibration, sound, temperature, light, and pressure. In many cases, we can repurpose the millions of miles of fiber optic cables already in use. We're now developing the means to take this to the next level with photonic computing, by creating the ability to process data while it is still inside the fiber optic cables.
Another potential application area that could benefit from photonic computing is processing the explosive growth in data generated by machine-to-machine applications. For example, the average electric vehicle today has over 3,000 chips in it, most of which process data from one sensor to instruct another computer component to take a specific action. In the future, some of these processes could be implemented using photonics, which could provide faster reaction times, reduce energy consumption, and improve range in battery-powered vehicles.
Yet another potential application area is remote surgery, increasing the ability of surgeons to perform complicated procedures from a distance. These complex human-machine interactions require high-speed, low-latency connectivity. Optical networks and photonic processors promise to deliver on both metrics while consuming little energy.
We are very much still in the early days of photonic computation. At NEC Labs America, we are developing approaches to make photonic computing a commercially viable option with an initial focus on “edge computing.” At the edge of the network, traditional microprocessors are starting to experience issues maintaining bandwidth and latency while servicing growing data-intensive applications, especially Artificial Intelligence.
The plan for this next-generation hardware is to process high-bandwidth analog signals with low latency, enabling new edge applications that are out of reach for current silicon chips. With these advantages, photonic neural networks can be deployed for low-latency Edge AI in real-time applications with high-bandwidth data. Other application areas include human-assisted self-driving, remotely operated fleets, and coordinated flight and reconnaissance for disaster recovery.
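One way to build intuition for how a photonic neural network processes analog signals is to note that a neural-network layer is dominated by matrix-vector multiplies, which optics can perform "in flight": light is split across paths, each path is attenuated by a weight, and photodetectors sum the results. The sketch below is a purely conceptual simulation of that analog multiply (not NEC's design or any specific device), including a small amount of readout noise:

```python
import numpy as np

# Conceptual model of an analog optical matrix-vector multiply:
# weights act as per-path optical transmissions in [0, 1], and the
# photodetector sums the attenuated light from all input channels.
rng = np.random.default_rng(0)

weights = rng.uniform(0.0, 1.0, size=(4, 8))  # transmission of each optical path
signal = rng.uniform(0.0, 1.0, size=8)        # input optical power per channel

ideal = weights @ signal                      # what an exact digital multiply yields
noisy = ideal + rng.normal(0.0, 0.01, size=4) # analog readout with small detector noise

# The analog result tracks the digital one to within the noise floor.
print(np.max(np.abs(noisy - ideal)))
```

The appeal is that the multiply happens at the speed of light propagation rather than at a clocked digital rate, at the cost of analog precision, which is why such hardware targets high-bandwidth, latency-critical workloads rather than exact computation.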
One thing is clear. In the near future, alternatives to traditional computation architectures will be necessary to economically process the explosive amount of data being generated by both humans interacting with computers and machines interacting with each other. Our research in photonic computing offers a compelling solution to the challenges faced by these application areas. We also expect this fundamental computing fabric to unlock new application areas the industry has yet to explore.
NEC Labs America has a proven track record of developing cutting-edge technologies focused on advancing humanity by making the world more efficient, receptive, and responsive to each of us, our communities, and society at large. Photonic computing is a key research area in continuing to achieve this goal.