Optical computing enables the classification of objects hidden behind scattering media

Researchers at the University of California, Los Angeles have proposed a method that uses all-optical diffractive neural networks and broadband illumination to recognize objects through random scattering media. In this method, 20 illumination wavelengths are selected, the object information distorted by the scattering medium is mapped by the optical neural network to spectral intensity features, and a single-pixel detector collects the result. Experiments show that this method can successfully classify objects hidden behind unknown random scattering media. The research could contribute to many application areas, such as healthcare, communications, and aerospace.

Object recognition through random scattering media is a challenging task with important implications for biomedical imaging, oceanography, information security, robotics, and autonomous driving. Although many approaches have been proposed to solve this problem, current techniques mainly rely on conventional digital computers, which demand substantial computing resources and energy, and their performance on new, unseen scatterers still leaves room for improvement.

Recently, a research team at the University of California, Los Angeles (UCLA) developed an all-optical method for classifying objects hidden behind scattering media using diffractive deep neural networks (D2NNs). Diffractive neural networks are a free-space optical computing tool that has attracted growing research attention in recent years. Such a network consists of a set of carefully designed, spatially engineered surfaces that modulate the diffraction of light to perform a specific computing task; it can be thought of as an all-optical computer that performs its calculations at the speed of light. All-optical computers offer fast, parallel computation with low power consumption, and are expected to play a role in tasks such as object classification, quantitative phase imaging, and general linear transformations.
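The idea of cascaded diffractive surfaces can be illustrated with a minimal, self-contained numerical sketch (not the authors' code): free-space propagation is modeled with the standard angular-spectrum method, interleaved with phase-only surfaces. All parameter values below (grid size, terahertz-scale wavelength, layer spacing) are illustrative assumptions.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, distance, pixel_size):
    """Propagate a complex optical field over `distance` of free space."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pixel_size)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2   # squared longitudinal spatial frequency
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * distance) * (arg > 0)  # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

def diffractive_forward(field, phase_layers, wavelength, spacing, pixel_size):
    """Cascade the field through phase-only diffractive surfaces."""
    for phase in phase_layers:
        field = angular_spectrum_propagate(field, wavelength, spacing, pixel_size)
        field = field * np.exp(1j * phase)      # modulation by one surface
    return angular_spectrum_propagate(field, wavelength, spacing, pixel_size)

# Toy example: 3 random (untrained) phase layers on a 64x64 grid,
# terahertz-scale wavelength of 0.75 mm.
rng = np.random.default_rng(0)
layers = [rng.uniform(0, 2 * np.pi, size=(64, 64)) for _ in range(3)]
obj = np.zeros((64, 64), dtype=complex)
obj[24:40, 24:40] = 1.0                         # simple square aperture as the input
out = diffractive_forward(obj, layers, wavelength=0.75e-3,
                          spacing=0.03, pixel_size=0.4e-3)
print(out.shape)  # (64, 64)
```

In an actual design, the phase values of each layer would be optimized with deep learning rather than drawn at random, and the trained surfaces would then be fabricated.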


Single-pixel broadband diffractive neural networks classify handwritten digits through unknown random scattering media. A broadband single-pixel diffractive optical network maps the spatial information of objects behind an unknown random scattering medium to the spectral power at the output pixel aperture. The classification scores encoded in this spectral power reveal the class of the input object behind the random scattering medium. Image Credit: Ozcan Lab @ UCLA.

The research paper, titled “All-optical image classification through unknown random diffusers using a single-pixel diffractive network,” has been published in Light: Science & Applications. The method uses a diffractive neural network under broadband illumination to directly classify objects behind unknown scattering media with a single-pixel spectral detector. The broadband diffractive network uses 20 discrete wavelengths to map the object information distorted by the scattering medium into spectral features that a single-pixel detector can measure. Randomly generated scattering media are used during network training to improve the generalization performance of the diffractive optical network. This deep-learning-based training is performed only once; afterwards, the resulting diffractive surfaces can be fabricated to form a physical single-pixel diffractive neural network that classifies objects hidden behind new, unknown random scattering media never seen during training.
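The readout step can be sketched as follows. The article states that 20 wavelengths carry the class information in the single-pixel spectrum; the specific encoding below, where two wavelengths form a differential score for each of the 10 digit classes, is an assumption made for illustration, not a detail taken from the paper.

```python
import numpy as np

def classify_from_spectrum(spectral_power):
    """spectral_power: 20 measured intensities, one per illumination wavelength.

    Hypothetical decoding: consecutive wavelength pairs form differential
    class scores for the 10 handwritten digits; the largest score wins.
    """
    scores = spectral_power[0::2] - spectral_power[1::2]  # 10 class scores
    return int(np.argmax(scores))

# Synthetic spectrum whose 4th differential pair (indices 6 and 7) dominates.
spectrum = np.zeros(20)
spectrum[6] = 1.0
print(classify_from_spectrum(spectrum))  # 3
```

Note that in the physical system this argmax happens after the optical network has already done the heavy computation; the electronic side only compares 20 spectral readings.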

Simulation results show that the single-pixel broadband network achieves a blind-testing accuracy of 87.74% when classifying handwritten digits behind unknown random scattering media. In addition, the researchers used a 3D-printed diffractive network and a terahertz time-domain spectroscopy system to experimentally verify the feasibility of this single-pixel broadband classifier. The proposed optical computing framework can be scaled proportionally with the illumination wavelength, making it applicable to different bands of the electromagnetic spectrum without redesigning or retraining the diffractive surfaces.
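The scalability claim can be checked numerically with a small sketch: multiplying the wavelength, the propagation distance, and the feature (pixel) size by the same factor leaves the free-space diffraction behavior unchanged, so a trained design transfers to another band. The angular-spectrum transfer function and all numeric values here are illustrative assumptions.

```python
import numpy as np

def transfer_function(wavelength, distance, pixel_size, n=64):
    """Angular-spectrum transfer function of free space on an n x n grid."""
    fx = np.fft.fftfreq(n, d=pixel_size)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    return np.exp(1j * kz * distance) * (arg > 0)

s = 5.0  # arbitrary scale factor between two operating bands
H_band1 = transfer_function(0.75e-3, 0.03, 0.4e-3)          # terahertz-scale design
H_band2 = transfer_function(0.75e-3 * s, 0.03 * s, 0.4e-3 * s)  # scaled-up copy
print(np.allclose(H_band1, H_band2))  # True
```

The spatial frequencies scale as 1/s while the propagation distance scales as s, so the accumulated phase is identical, which is why a single one-time design can serve multiple wavelength bands.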

The research was led by Professor Aydogan Ozcan, who serves as UCLA Chancellor’s Professor, the Volgenau Chair for Engineering Innovation, and a Howard Hughes Medical Institute (HHMI) Professor. Dr. Ozcan said, “This work demonstrates, for the first time, that objects behind random scattering media can be classified using all-optical computation that generalizes to new, unknown scattering media. We believe this research will have a significant impact on the development of faster, more efficient, and scalable object/image classification through random scattering media, and will advance a wide range of fields such as healthcare, biomedicine, communications, and even aerospace.”

Co-authors of the work include Bijie Bai, Yuhang Li, Yi Luo, Xurong Li, Ege Çetintaş, and Mona Jarrahi from UCLA’s Department of Electrical and Computer Engineering. Professor Ozcan also holds appointments in UCLA’s Department of Bioengineering and Department of Surgery, and serves as Associate Director of the California NanoSystems Institute (CNSI).

The research results were published in Light: Science & Applications under the title “All-optical image classification through unknown random diffusers using a single-pixel diffractive network.” (Source: Light: Science & Applications WeChat public account)

