A new optical approach to neural networks has been proposed by researchers at the Max Planck Institute for the Science of Light, offering a simpler and more energy-efficient alternative to current methods. The system uses light transmission to perform computations, reducing the complexity and energy demands associated with conventional neural networks.
Scientists propose a new way of implementing a neural network with an optical system, which could make machine learning more sustainable in the future. The researchers at the Max Planck Institute for the Science of Light published their new method on July 9 in Nature Physics, demonstrating an approach much simpler than previous ones.
Machine learning and artificial intelligence are becoming increasingly widespread, with applications ranging from computer vision to text generation, as demonstrated by ChatGPT. However, these complex tasks require increasingly large neural networks, some with many billions of parameters.
This rapid growth of neural network size has put the technologies on an unsustainable path due to their exponentially growing energy consumption and training times. For instance, it is estimated that training GPT-3 consumed more than 1,000 MWh of energy, which amounts to the daily electrical energy consumption of a small town.
This trend has created a need for faster, more energy- and cost-efficient alternatives, sparking the rapidly developing field of neuromorphic computing. The aim of this field is to replace the neural networks on our digital computers with physical neural networks. These are engineered to perform the required mathematical operations physically in a potentially faster and more energy-efficient way.
Optics and photonics are particularly promising platforms for neuromorphic computing, since energy consumption can be kept to a minimum. Computations can be performed in parallel at very high speeds, limited only by the speed of light. So far, however, there have been two significant challenges: first, realizing the necessary complex mathematical computations requires high laser powers; second, there has been no efficient, general training method for such physical neural networks.
Both challenges can be overcome with the new method proposed by Clara Wanjura and Florian Marquardt from the Max Planck Institute for the Science of Light in their new article in Nature Physics.
Simplifying neural network training
“Normally, the data input is imprinted on the light field. However, in our new method we propose to imprint the input by changing the light transmission,” explains Florian Marquardt, Director at the Institute.
In this way, the input signal can be processed in an arbitrary fashion, even though the light field itself behaves in the simplest way possible: waves simply interfere without otherwise influencing each other. Their approach therefore avoids the complicated physical interactions that would otherwise be needed to realize the required mathematical functions and that demand high-power light fields.
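To make the idea concrete, the following toy sketch (our own illustration, not the authors' actual model) simulates a linear scattering system in which both the input data and the trainable parameters are written into the medium, here as hypothetical mode detunings. Although the waves obey a purely linear equation, the measured transmission depends nonlinearly on these parameters, because they enter the result through a matrix inverse.

```python
import numpy as np

rng = np.random.default_rng(0)

N_MODES = 8   # internal modes of the toy scattering system (hypothetical size)
N_IN = 4      # number of input values imprinted on the medium
N_OUT = 3     # number of output ports whose transmission we read off

# Fixed, randomly chosen coupling structure between the modes (toy choice).
base_coupling = rng.normal(size=(N_MODES, N_MODES))
base_coupling = 0.5 * (base_coupling + base_coupling.T)  # symmetric couplings

def transmission(x, theta, omega=0.0, kappa=0.1):
    """Transmitted intensities of a linear scattering system whose mode
    detunings carry both the input x and the trainable parameters theta.
    The wave equation solved here is linear, yet the output depends
    nonlinearly on x and theta because they sit inside a matrix inverse."""
    detunings = np.zeros(N_MODES)
    detunings[:N_IN] = x                          # data written into the medium
    detunings[N_IN:N_IN + len(theta)] = theta     # trainable detunings
    # Effective (non-Hermitian) mode matrix: couplings + detunings + uniform loss
    H = base_coupling + np.diag(detunings) - 1j * kappa * np.eye(N_MODES)
    # Green's function of the linear wave equation at the probe frequency
    G = np.linalg.inv(omega * np.eye(N_MODES) - H)
    # Intensities transmitted from the input port to a few output ports
    return np.abs(G[-N_OUT:, 0]) ** 2

x = rng.normal(size=N_IN)                 # example input
theta = rng.normal(size=N_MODES - N_IN)   # trainable parameters
print(transmission(x, theta))             # nonlinear in x despite linear waves
```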
Evaluating and training this physical neural network would then become very straightforward: “It would really be as simple as sending light through the system and observing the transmitted light. This lets us evaluate the output of the network. At the same time, this allows one to measure all relevant information for the training,” says Clara Wanjura, the first author of the study.
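Continuing the toy model above, the sketch below shows what such a measure-and-update loop could look like. It estimates gradients by plain finite differences on the measured transmission; the scheme in the Nature Physics paper extracts the training information more directly from the transmitted light, which this generic stand-in does not reproduce. The target values are hypothetical.

```python
def loss(theta, x, target):
    """Mean-squared error between the measured transmission and a target."""
    return np.mean((transmission(x, theta) - target) ** 2)

def train_step(theta, x, target, lr=0.05, eps=1e-3):
    """One parameter update using gradients estimated purely from
    transmission measurements (finite differences here; the paper's own
    scheme obtains this information more directly from the light)."""
    grad = np.zeros_like(theta)
    for i in range(len(theta)):
        shift = np.zeros_like(theta)
        shift[i] = eps
        grad[i] = (loss(theta + shift, x, target)
                   - loss(theta - shift, x, target)) / (2 * eps)
    return theta - lr * grad

target = np.array([0.2, 0.05, 0.1])  # hypothetical desired output intensities
for _ in range(200):
    theta = train_step(theta, x, target)
print(loss(theta, x, target))        # typically decreases as the medium adapts
```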
The authors demonstrated in simulations that their approach can be used to perform image classification tasks with the same accuracy as digital neural networks.
In the future, the authors plan to collaborate with experimental groups to explore implementations of their method. Since their proposal significantly relaxes the experimental requirements, it can be applied to many physically very different systems. This opens up new possibilities for neuromorphic devices that allow physical training on a broad range of platforms.