OPTICAL NEUROMORPHIC COMPUTING
The long-range interference of optics enables natural convolution and vector-matrix multiplication. These operations can be used in neuromorphic compute systems as shown on the left. A strong experimental thrust to design, build, and demonstrate these proposed systems will serve as a first feasibility proof of using integrated photonics for scalable information processing. Key points of innovation include: 1) arbitrarily scalable silicon photonic neural network architectures and the testbeds required to demonstrate them, 2) attojoule-per-MAC electro-optic modulator neurons deploying unity-strong optical index-switching materials and plasmonic modes to tailor light-matter interaction at the nanoscale, and 3) metrics required to simulate and design these systems and benchmarks required to orient them within the broader field of computing applications. The nonlinearity required for the perceptron algorithm is provided by the transfer function of our compact, aJ/bit-efficient hybrid integrated photonic/plasmonic modulators. The energy efficiency of this photonic neuromorphic compute engine depends on the energy-per-bit function of the modulator neurons, while node-to-node cascadability of the neural network is set by the ability of the modulator neuron's nonlinear transfer function to limit SNR reduction. This novel optical neuromorphic computer could deliver efficiencies of 10^9 GMAC/J, beyond the CMOS digital efficiency wall.
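The scheme above, a passive optical weighted sum followed by a modulator-neuron nonlinearity, can be sketched numerically. This is a minimal illustration, not the measured device response: the sin² transfer curve below merely stands in for the hybrid photonic/plasmonic modulator's actual transfer function, and all weight and input values are made up.

```python
import numpy as np

def modulator_activation(x, v_pi=1.0):
    # Placeholder electro-optic transfer function: a smooth, saturating
    # sin^2 response standing in for the measured nonlinearity of the
    # hybrid photonic/plasmonic modulator neuron.
    return np.sin(np.pi * x / (2 * v_pi)) ** 2

def photonic_perceptron_layer(inputs, weights):
    # The weighted addition (MAC) is performed passively by linear
    # optics; here it is emulated as a vector-matrix multiplication.
    summed = weights @ inputs
    return modulator_activation(summed)

x = np.array([0.2, 0.8, 0.5])           # illustrative input intensities
W = np.array([[0.3, 0.4, 0.3],
              [0.6, 0.1, 0.3]])          # illustrative weight bank
out = photonic_perceptron_layer(x, W)
```

In hardware the matrix multiply costs only propagation delay; the modulator supplies both the nonlinearity and the per-bit energy figure that sets overall efficiency.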
ELECTRO-OPTIC ACTIVATION FUNCTIONS
Photonic neural networks benefit from both the high channel capacity and the wave nature of light, which acts as an effective weighting mechanism through linear optics. Incorporating a nonlinear activation function using active integrated photonic components allows neural networks with multiple layers to be built monolithically, eliminating the energy and latency costs of external conversion. Interferometer-based modulators, while popular in communications, have been shown to require more area than absorption-based modulators, resulting in reduced neural network density. Here, we develop a model for absorption modulators in an electro-optic fully connected neural network, including noise, and compare the network's performance across the activation functions produced intrinsically by five types of absorption modulators. Our results show that the quantum-well absorption modulator-based electro-optic neuron has the best performance, allowing 96% prediction accuracy at 1.7×10^-12 J/MAC (excluding laser power) when performing MNIST classification in a two-hidden-layer feedforward photonic neural network.
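The modeled network can be sketched as an ordinary feedforward pass whose activation is the modulator's intrinsic transfer curve. The sigmoid-shaped absorption curve below is an illustrative stand-in (the actual quantum-well absorption response is device-specific), and the layer sizes and random weights are assumptions for demonstration only.

```python
import numpy as np

def absorption_activation(v):
    # Illustrative electro-absorption transfer curve: transmitted optical
    # power falls off smoothly as drive voltage increases. The real
    # quantum-well modulator curve modeled in the text differs in detail.
    return 1.0 / (1.0 + np.exp(4.0 * (v - 0.5)))

def forward(x, layers):
    # Feedforward pass: each layer is a passive optical vector-matrix
    # multiply followed by the intrinsic modulator nonlinearity.
    for W in layers:
        x = absorption_activation(W @ x)
    return x

rng = np.random.default_rng(0)
# Two hidden layers, as in the MNIST network described above
# (64-d input, 10 output classes: illustrative dimensions).
layers = [rng.normal(size=(16, 64)),
          rng.normal(size=(16, 16)),
          rng.normal(size=(10, 16))]
scores = forward(rng.normal(size=64), layers)
```

Swapping in each of the five modulator transfer curves at `absorption_activation` is how the comparison in the text would be carried out.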

ALL-OPTICAL ACTIVATION FUNCTIONS
Engineering data-information processors capable of executing NN algorithms with high efficiency is of major importance for applications ranging from pattern recognition to classification. Our hypothesis is, therefore, that if the time-limiting electro-optic conversion of current photonic NN designs could be postponed until the very end of the network, then the execution time of the photonic algorithm is simply the time-of-flight delay of photons through the NN, which is on the order of picoseconds for integrated photonics. Exploring such all-optical NNs, in this work we discuss two independent approaches to implementing the optical perceptron's nonlinear activation function based on nanophotonic structures exhibiting i) induced transparency and ii) reverse saturable absorption. Our results show that the all-optical nonlinearity provides about 3 and 7 dB of extinction ratio for the two systems considered, respectively, and classification accuracies of 97% and near 100% are found on an exemplary MNIST task, rivaling software-trained NNs, albeit with noise in the network neglected. Together with a developed concept for an all-optical perceptron, these findings point to the possibility of realizing pure photonic NNs with potentially unmatched throughput and energy consumption for next-generation information-processing hardware.
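The two nonlinearities and the extinction-ratio figure of merit can be illustrated with toy transfer functions. These curves and their parameters are assumptions chosen only to show the opposite signs of the two responses and how extinction ratio is computed; they are not fits to the structures discussed above.

```python
import numpy as np

def induced_transparency(p_in, t_min=0.5):
    # Toy induced-transparency response: transmission rises from t_min
    # toward 1 as input power increases (parameters illustrative).
    return t_min + (1 - t_min) * p_in / (p_in + 1.0)

def reverse_saturable_absorber(p_in, t_max=1.0):
    # Toy reverse-saturable absorption: transmission *falls* with input
    # power, giving an inverted, thresholding activation shape.
    return t_max / (1.0 + p_in)

def extinction_ratio_db(t_on, t_off):
    # Extinction ratio between high- and low-transmission states, in dB.
    return 10 * np.log10(t_on / t_off)

er_it = extinction_ratio_db(induced_transparency(10.0),
                            induced_transparency(0.01))
er_rsa = extinction_ratio_db(reverse_saturable_absorber(0.01),
                             reverse_saturable_absorber(10.0))
```

In a network simulation, either curve would be applied element-wise after each optical weighted sum, exactly where the electro-optic conversion would otherwise sit.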

PHOTONIC SPIKING NEUROMORPHICS FOR MIRROR SYMMETRY DETECTION
The ability to rapidly identify symmetry and antisymmetry is an essential attribute of intelligence. Symmetry perception is a central process in human vision and may be key to human 3D visualization. While previous work in understanding neuronal symmetry perception has concentrated on the neuron as an integrator, here we show how the coincidence-detecting property of the spiking neuron can be used to reveal symmetry density in spatial data. We develop a method for synchronizing symmetry-identifying spiking artificial neural networks to enable layering and feedback in the network. We show a method for building a network capable of identifying symmetry density between sets of data and present a digital logic implementation demonstrating an 8×8 leaky integrate-and-fire symmetry detector in a field-programmable gate array. Our results show that the efficiencies of spiking neural networks can be harnessed to rapidly identify symmetry in spatial data, with applications in image processing, 3D computer vision, and robotics.
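The coincidence-detection principle can be sketched with a minimal discrete-time leaky integrate-and-fire neuron: two mirrored pixels each drive one synapse, and the threshold is set so that only simultaneous (symmetric) spikes fire the neuron. The threshold, leak, and weight values below are illustrative, not those of the FPGA implementation.

```python
def lif_coincidence(spikes_a, spikes_b, threshold=1.5, leak=0.5, w=1.0):
    # Discrete-time leaky integrate-and-fire coincidence detector:
    # a single input spike (w = 1.0) cannot reach threshold (1.5),
    # but two coincident spikes (summed weight 2.0) can.
    v = 0.0
    out = []
    for a, b in zip(spikes_a, spikes_b):
        v = v * leak + w * a + w * b   # leaky integration of both inputs
        if v >= threshold:
            out.append(1)
            v = 0.0                    # reset membrane after firing
        else:
            out.append(0)
    return out
```

Tiling such neurons over mirrored pixel pairs, with output spike counts accumulated per region, yields the symmetry-density map described above.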

PHOTONIC INTEGRATED CONVOLUTIONAL NEURAL NETWORKS (PCNNs)
Neural networks (NNs) have become the mainstream technology of the artificial intelligence (AI) renaissance over the past decade. Among the different types of neural networks, convolutional neural networks (CNNs) have been widely adopted, as they have achieved leading results in fields such as computer vision and speech recognition. This success is due in part to the widespread availability of capable underlying hardware platforms. Applications have always been a driving factor in the design of such hardware architectures. Hardware specialization can expose novel architectural solutions that outperform general-purpose computers for the tasks at hand. Although different applications demand different performance measures, they all share speed and energy efficiency as high priorities. Meanwhile, photonic processing has seen a resurgence due to its inherent high speed and low power consumption. Here, we investigate the potential of using photonics in CNNs by proposing a CNN accelerator design based on the Winograd filtering algorithm. Our evaluation results show that while a photonic accelerator can compete with current state-of-the-art electronic platforms in terms of both speed and power, it has the potential to improve energy efficiency by up to three orders of magnitude.
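The Winograd filtering algorithm trades multiplications for additions, which matters when each multiplication maps onto a photonic element. The smallest instance, F(2,3), computes two outputs of a 3-tap convolution with 4 multiplications instead of 6; the sketch below shows only this 1D building block, not the accelerator design itself.

```python
def winograd_f23(d, g):
    # Winograd minimal filtering F(2,3): two outputs of a 3-tap
    # convolution using 4 multiplications (m1..m4) instead of 6.
    d0, d1, d2, d3 = d
    g0, g1, g2 = g
    m1 = (d0 - d2) * g0
    m2 = (d1 + d2) * (g0 + g1 + g2) / 2
    m3 = (d2 - d1) * (g0 - g1 + g2) / 2
    m4 = (d1 - d3) * g2
    # y0 = d0*g0 + d1*g1 + d2*g2, y1 = d1*g0 + d2*g1 + d3*g2
    return [m1 + m2 + m3, m2 - m3 - m4]
```

The filter-side combinations (g0 + g1 + g2)/2 and (g0 - g1 + g2)/2 are fixed once the weights are trained, so in a photonic accelerator they can be precomputed and set only once.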
CNN USING FOURIER OPTICS
The 4f system uses low-power laser light to perform convolutions in parallel at high speed and resolution, consuming very little power compared with current CMOS-based technology. It is employed as the base unit of a CNN for intelligent decision-making, classification, and pattern recognition in challenging big-data problems.
The system is based on two stages of Fourier transform (FT). A pattern is formed by shining laser light onto an electronically configured digital micromirror device (DLM 6500), which modulates the laser intensity, encoding analog numerical data. At the Fourier plane, the light pattern convolves with a second image and is subsequently transformed back into real space. This system can perform Fourier-transform calculations 10 times faster than an Nvidia P6000 graphics card (100 GPixel/s; 12 TFLOPS at a maximum power consumption of 250 W), commonly employed for high-performance computation, using just 25 percent of the power.
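The two FT stages implement the convolution theorem: transform, multiply at the Fourier plane, transform back. A minimal digital emulation of this optical pipeline (circular convolution, with an assumed kernel already defined over the image grid) looks like:

```python
import numpy as np

def fourier_convolve(image, kernel):
    # Digital emulation of the 4f correlator: first FT (first lens),
    # elementwise product at the Fourier plane (the filter mask),
    # second FT back to real space (second lens). The result is a
    # circular convolution of image and kernel.
    spectrum = np.fft.fft2(image) * np.fft.fft2(kernel, s=image.shape)
    return np.real(np.fft.ifft2(spectrum))
```

Each lens performs its FT at the speed of light regardless of resolution, which is the source of the throughput advantage quoted above; the digital version pays O(N log N) per transform instead.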
IN-MEMORY INTELLIGENT COMPUTING
In contrast to electronics, integrated photonics can provide low-delay interconnectivity, which meets the requirements of node-distributed non-von-Neumann architectures implementing neural networks that rely on dense node-to-node communication. Moreover, weighted addition (MAC) and vector-matrix multiplication can be performed effortlessly and passively in photonics, leveraging the wave nature of the signals, by means of phase modulation or wavelength-division multiplexing using linear electro-optic modulators, MZIs, or microring modulators. Thus, once the neural network's weights are set after training, the delay in the network is given by the time-of-flight of the photons, which for a large network is in the picosecond range. However, the functionality of memory for storing the trained weights does not exist in optics, at least not in a non-volatile implementation, and therefore requires additional circuitry and components (DACs, memory) and the related static power consumption, sinking the overall benefits (energy efficiency and speed) of photonics. Moreover, due to the absence of straightforward and efficient optical nonlinearities, the activation function is usually introduced in the electrical domain, magnifying the otherwise picosecond photon time-of-flight latency of the network and compromising its cascadability into multiple layers. Therefore, computing artificial-intelligence tasks while transferring and storing data exclusively in the optical domain is highly desirable because of the inherently large bandwidth, low residual crosstalk, and high speed of optical information transfer.
We propose a prototype of a robust all-optical neural network based on the implementation of a node's perceptron on a photonic integrated circuit (PIC), which exploits both thermal (non-volatile) and optical (volatile) transitions of phase-change materials (PCMs) as weighting and activation functions, respectively. Deploying monolithic integration of GeSbTe (GST), a foundry-process-near material (recently approved for Intel foundries), and exploiting its peculiar light-matter interactions, we achieve storage of quantized perceptron weights and nonlinear volatile transfer functions, enabling sub-picosecond latency (the time-of-flight of the photon plus the cooling of the GST after the optical transition, which avoids heating and a non-volatile transition), or 10^12 MAC/s, with an efficiency of 10^17 MAC/J: three orders of magnitude faster and several orders of magnitude more efficient than state-of-the-art GPUs performing inference.
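The node described above, quantized non-volatile GST weights, passive WDM summation, and a volatile optical nonlinearity, can be sketched numerically. The number of quantization levels, the activation shape, and its threshold below are illustrative assumptions, not measured GST parameters.

```python
import numpy as np

def quantize_weights(w, levels=4):
    # Non-volatile weight storage: each weight is mapped to one of a few
    # GST transmission states (assumed evenly spaced in [0, 1]).
    w = np.clip(w, 0.0, 1.0)
    return np.round(w * (levels - 1)) / (levels - 1)

def pcm_activation(p, p_th=0.5):
    # Illustrative volatile optical nonlinearity: transmission drops once
    # input power exceeds a threshold. The actual GST volatile transition
    # is material-specific and not modeled here.
    return p / (1.0 + np.exp(10.0 * (p - p_th)))

def pcm_perceptron(inputs, weights, levels=4):
    # Each input rides its own wavelength; attenuation by the stored GST
    # transmission state and incoherent summation at the output implement
    # the MAC, followed by the volatile activation.
    wq = quantize_weights(weights, levels)
    return pcm_activation(np.dot(wq, inputs))

out = pcm_perceptron(np.array([0.5, 0.5]), np.array([0.8, 0.2]))
```

Because the weights persist in the GST state, no DACs or electronic memory are consulted during inference, which is the source of the static-power saving claimed above.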