Overview

Photonics

Unlike current Artificial Intelligence implementations using digital spatial convolutions, Look Dynamics’ Photonic Neural Net (PNN) harnesses the ultimate parallelism of photons, using optical Fourier transforms to enable processing of any digital data normally processed by CNNs. It offers much higher speed and power efficiency than even the fastest GPUs or custom Neural Network ASICs.

The PNN supports all existing CNN architectures and training methods. The convolutions are the same as those computed by traditional digital methods, except that they are calculated in a photonic Fourier space and are inherently more accurate. Dedicated on-chip circuitry supports pooling, ReLU, thresholds, deconvolution flags, and all other linear and non-linear operations needed to fully implement any AI architecture. Nothing to change and nothing to learn.
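The mathematical identity at work here is the convolution theorem: a convolution in the spatial domain equals a pointwise product in the Fourier domain. The sketch below is a purely digital illustration of that identity, not of Look’s photonic hardware; the array sizes, libraries, and padding choices are our own assumptions.

```python
import numpy as np
from scipy.signal import convolve2d

# Illustrative sketch of the convolution theorem, not of the PNN itself.
# A spatial convolution equals the inverse FFT of the product of the two
# FFTs, provided both operands are zero-padded to the full output size.
rng = np.random.default_rng(0)
image = rng.standard_normal((64, 64))   # hypothetical input plane
kernel = rng.standard_normal((5, 5))    # hypothetical convolution kernel

# Reference result: ordinary digital spatial convolution.
spatial = convolve2d(image, kernel, mode="full")

# Fourier-domain route: pad, multiply spectra, transform back.
out_shape = (image.shape[0] + kernel.shape[0] - 1,
             image.shape[1] + kernel.shape[1] - 1)
fourier = np.fft.irfft2(np.fft.rfft2(image, out_shape) *
                        np.fft.rfft2(kernel, out_shape), out_shape)

assert np.allclose(spatial, fourier)  # agree to floating-point precision
```

An optical Fourier transform performs the transform step with optics rather than an FFT, acting on the entire data plane at once; that full-plane parallelism is what the digital sketch cannot capture.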

But there is more.

In addition to convolutions, the PNN is built on a silicon analog array device that implements super-speed array operations, combining nearly instantaneous processing with next-generation permutational algorithms to support a wide variety of architectures, including Large Language Models, Transformers, SVMs, RNNs, CNNs, diffusion models, and more. All can be configured on the fly.

The key differences between Look’s technology and current digital approaches are greatly improved speed, power, size, and latency. Reflecting a data plane off of a modulator is the fastest possible way to calculate array operations, including convolution, giving the PNN full-resolution parallelism at the speed of light. Combined with its analog array architecture, in which every data element is retained on-chip in the ideal location for the next stage, it is nearly 100% efficient.

A THOUSAND TIMES FASTER
A THOUSANDTH OF THE POWER

Performance

The PNN input data plane is up to 4K x 4K in “one chunk” and can be up to sixty-four layers deep in situ. Larger data blocks can be efficiently streamed from digital sources with a minor decrease in performance. The PNN’s execution speed is dependent on the Neural Net architecture for which it is configured.

For example, when processing imagery, a simple VGG-16 configuration executes in three microseconds, while a Mask R-CNN + ResNet-152, with many more terms, takes about five microseconds.
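In throughput terms, and taking the quoted times at face value (simple arithmetic on the numbers above, not a measured benchmark), those execution times imply the following full-resolution frame rates:

```python
# Back-of-envelope frame rates implied by the quoted execution times.
# These are arithmetic on the numbers in the text, not benchmarks.
times_s = {
    "VGG-16": 3e-6,                    # quoted: three microseconds
    "Mask R-CNN + ResNet-152": 5e-6,   # quoted: about five microseconds
}
for name, t in times_s.items():
    print(f"{name}: {1 / t:,.0f} full-resolution frames per second")
# VGG-16: 333,333 full-resolution frames per second
# Mask R-CNN + ResNet-152: 200,000 full-resolution frames per second
```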

LLM execution times will vary with model complexity, but will still be thousands of times faster than even the fastest GPU implementations.

Other architectures will yield similar results. In comparing performance, remember that these times are for full 4K resolution data planes. Total latency is typically around ten microseconds. All this with a single PNN module.

Power consumption is extraordinarily low since all of the “heavy lifting” convolutions and sums are completely analog full-frame photonic calculations. Typically the module consumes less than ten watts, regardless of the configuration architecture.

WHAT MULTIMODAL AI
WAS MEANT TO BE

Scalability

The Look Dynamics Photonic Neural Net can handle even the heaviest Data Center loads.

For AI computations, four Look Dynamics PNN Modules in a 1U rack tray with realistic I/O can achieve a peak performance of four exaFLOPS while drawing about 136 watts, including two 400G SR8 fiber transceivers.

Six racks of PNN modules with their fiber connections comprise a zettaFLOPS system when executing AI algorithms and would consume 34 kW all-in.
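Those rack-level numbers are internally consistent, assuming standard 42U racks filled with one four-module tray per U (our assumption; the text does not state the rack height):

```python
# Sanity check of the quoted rack-level figures. The 42U rack height and
# one-tray-per-U packing are our assumptions, not stated in the text.
TRAY_FLOPS = 4e18    # quoted: four exaFLOPS per 1U tray
TRAY_WATTS = 136     # quoted: about 136 W per tray, transceivers included
trays = 6 * 42       # six racks of 1U trays

total_flops = trays * TRAY_FLOPS  # 1.008e21 FLOPS, i.e. ~1 zettaFLOPS
total_watts = trays * TRAY_WATTS  # 34,272 W, i.e. ~34 kW, matching the text
print(f"{total_flops:.3e} FLOPS, {total_watts / 1e3:.1f} kW")
```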

MASSIVE AI WITHOUT COMPROMISE
THE FUTURE OF MULTIMODAL AI

Contact

Technical:   Rikk Crill, CTO   rcrill@lookdynamics.com
Business:    David A. Bruce, CEO  dbruce@lookdynamics.com