The PNN input data plane is up to 4K × 4K in "one chunk" and can be up to sixty-four layers deep in situ. Larger data blocks can be streamed efficiently from digital sources with a minor decrease in performance. The PNN's execution speed depends on the neural-net architecture for which it is configured.
For example, when processing imagery, a simple VGG-16 configuration executes in three microseconds, while a Mask R-CNN with a ResNet-152 backbone, which has far more parameters, takes about five microseconds.
LLM performance will vary with model complexity, but will still be thousands of times faster than even the fastest GPU implementations.
Other architectures yield similar results. When comparing performance, remember that these times are for full 4K-resolution data planes. Total latency is typically around ten microseconds. All of this from a single PNN module.
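As a rough back-of-envelope check, the quoted per-inference latencies can be converted into sustained frame rates. The sketch below is illustrative only; it simply inverts the latency figures stated above and assumes every inference covers a full 4K × 4K data plane.

```python
# Back-of-envelope frame rates implied by the quoted per-inference latencies.
# The latency values are taken from the text above; everything else follows
# directly from them.

VGG16_LATENCY_S = 3e-6        # ~3 microseconds per VGG-16 inference (quoted)
MASK_RCNN_LATENCY_S = 5e-6    # ~5 microseconds per Mask R-CNN inference (quoted)

vgg16_fps = 1.0 / VGG16_LATENCY_S          # ~333,000 full 4K frames per second
mask_rcnn_fps = 1.0 / MASK_RCNN_LATENCY_S  # ~200,000 full 4K frames per second

print(f"VGG-16:     {vgg16_fps:,.0f} frames/s")
print(f"Mask R-CNN: {mask_rcnn_fps:,.0f} frames/s")
```

Even the slower configuration processes hundreds of thousands of full-resolution planes per second, which is the basis for the comparison against GPU implementations.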
Power consumption is extraordinarily low because all of the "heavy lifting" convolutions and sums are performed as fully analog, full-frame photonic calculations. The module typically consumes less than ten watts, regardless of the configured architecture.
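Combining the power and latency figures gives an energy-per-inference estimate. This is a hedged sketch: it assumes the module draws its full ten-watt budget continuously during an inference, whereas the text only gives ten watts as an upper bound.

```python
# Back-of-envelope energy per inference, assuming (an assumption, not a spec)
# that the module draws its full 10 W bound for the whole inference window.

POWER_W = 10.0            # quoted upper bound on module power draw
VGG16_LATENCY_S = 3e-6    # quoted VGG-16 inference time

energy_j = POWER_W * VGG16_LATENCY_S  # 3.0e-05 J, i.e. 30 microjoules
print(f"Energy per VGG-16 inference: {energy_j * 1e6:.0f} microjoules")
```

Under these assumptions, a full 4K-plane VGG-16 inference costs on the order of tens of microjoules.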


