Technology

Current Convolutional Neural Networks come in a variety of sizes and have rapidly evolving architectures, but under the hood, the vast majority of their computing cycles are still dedicated to convolution.

What if there were a better way?

A way that…

      • was hundreds of times faster and used a hundredth of the power?
      • supported even the latest, most advanced CNN architectures?
      • supported training and inference equally well?
      • seamlessly combined the flexibility of a fully programmable digital system with the inherent advantages of processing with light?

 

Meet Look Dynamics’ PHOTONIC CONVOLUTIONAL NEURAL NET

IT CHANGES THE EQUATION

Photonics

Unlike current Convolutional Neural Net implementations that rely on digital spatial convolutions, Look Dynamics’ Photonic Convolutional Neural Net (PNN) harnesses the parallelism of photons, using optical Fourier transforms to process any digital data normally processed by CNNs. It offers much higher speed and power efficiency than even the fastest GPUs or custom Neural Network ASICs.

The PNN supports all existing CNN architectures and training methods. The convolutions are the same as those computed by traditional digital methods, except that they are calculated in a photonic Fourier space and are inherently more accurate. Dedicated on-chip circuitry supports pooling, ReLU, thresholds, deconvolution flags, and all other operations needed to fully implement any CNN architecture. Nothing to change and nothing to learn.
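The equivalence of Fourier-space and spatial-domain convolution rests on the convolution theorem: a pointwise product in the Fourier domain is a convolution in the spatial domain. A minimal NumPy sketch of that identity (an illustration of the underlying math only, not Look Dynamics’ hardware or software):

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.standard_normal((32, 32))   # stand-in for one input layer
kernel = rng.standard_normal((5, 5))    # stand-in for one learned filter

def conv2d_direct(img, k):
    """Direct spatial 'full' convolution by shift-and-add."""
    H, W = img.shape
    kh, kw = k.shape
    out = np.zeros((H + kh - 1, W + kw - 1))
    for i in range(kh):
        for j in range(kw):
            out[i:i + H, j:j + W] += k[i, j] * img
    return out

def conv2d_fft(img, k):
    """Same convolution via the Fourier domain: FFT, multiply, inverse FFT."""
    shape = (img.shape[0] + k.shape[0] - 1, img.shape[1] + k.shape[1] - 1)
    return np.real(np.fft.ifft2(np.fft.fft2(img, shape) * np.fft.fft2(k, shape)))

# The two routes agree to floating-point precision.
print(np.allclose(conv2d_direct(image, kernel), conv2d_fft(image, kernel)))  # True
```

In the PNN the transform and product are performed optically rather than with FFTs, but the mathematical result is the same convolution.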

The key differences are speed and power. Reflecting an image off a modulator is the fastest possible way to calculate a convolution. The PNN delivers full-frame parallelism at full resolution at the speed of light. Combined with an architecture in which every data element is retained on-chip in its ideal location for the next stage, the PNN is nearly 100% efficient.

A HUNDRED TIMES FASTER
A HUNDREDTH OF THE POWER

Performance

The PNN accepts input images of 1920×1080 up to sixty-four layers deep, supporting everything from grayscale (1 layer) to RGB (3 layers) to hyperspectral (up to 64 layers). The PNN’s execution speed depends on the Neural Net architecture for which it is configured. For example, a VGG-16 configuration executes in fifty-three microseconds, while an Inception-ResNet-V2, with many more terms, takes about three milliseconds.

Other architectures will yield similar results. When comparing performance, remember that these times are for full 1920×1080 (HD) resolution images. Total latency is typically around one millisecond, so a VGG-16 job will see an answer in a little over one millisecond, while an Inception-ResNet-V2 job will see a total latency of a little over four milliseconds. All this with a single PNN module.
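Total time is roughly a fixed pipeline latency plus the per-architecture execution time, so the totals above follow directly from the stated figures (a quick check using only numbers quoted in this section):

```python
# Total time ≈ fixed pipeline latency + per-architecture execution time,
# using only the figures quoted above.
pipeline_latency_ms = 1.0        # "typically around one millisecond"
vgg16_exec_ms = 0.053            # fifty-three microseconds
inception_exec_ms = 3.0          # about three milliseconds

print(pipeline_latency_ms + vgg16_exec_ms)      # ≈1.05 ms: "a little over one millisecond"
print(pipeline_latency_ms + inception_exec_ms)  # ≈4 ms: "a little over four milliseconds"
```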

Power consumption is extraordinarily low, since all of the “heavy lifting” convolutions and sums are completely analog, full-frame photonic calculations. The module typically consumes less than five watts regardless of the configured architecture.

WHAT A NEURAL NET
WAS MEANT TO BE

Scalability

The Look Dynamics Photonic Neural Net can handle even the heaviest Data Center loads.

For convolutional neural net computations, four Look Dynamics PNN Modules in only 1U of rack height are the equivalent of over thirty-two Nvidia V100 GPUs. One full rack of 168 PNN modules consuming 1.7 kW is the equivalent of almost forty-nine racks of Nvidia V100 GPUs consuming 2.1 MW.
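The rack-level comparison follows from the 1U figure; a back-of-the-envelope check using only the numbers quoted above (the per-rack V100 density is implied by the comparison, not stated):

```python
# Back-of-the-envelope check of the rack-level comparison,
# using only the figures quoted above.
v100_per_pnn = 32 / 4                    # 4 PNN modules in 1U ≈ 32 V100s
v100_equivalents = 168 * v100_per_pnn    # one full rack of PNN modules
print(v100_equivalents)                  # 1344.0 V100 equivalents

power_ratio = 2_100_000 / 1_700          # 2.1 MW vs 1.7 kW
print(round(power_ratio))                # 1235, i.e. ~1,200x lower power
```

Spreading 1,344 V100 equivalents across the quoted forty-nine racks implies roughly 27 V100s per rack in this comparison.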

The power and floor space savings are obvious. A single rack and a common wall outlet will handle over 300,000 HD images per second.

But consider what this computational capacity means to the User.

Each PNN Node…

    • is individually addressable through 100G Ethernet or InfiniBand EDR.
    • can run different user-configurable CNN architectures per image, on the fly.
    • has a latency of about one millisecond for the most demanding applications.

CONVOLUTIONAL NEURAL NETS

WITHOUT COMPROMISE

Contact

info@lookdynamics.com

105 S. Sunset St., Suite T
Longmont, Colorado 80501

303-588-1442