INFORMATION TECHNOLOGY

Snapshot multispectral imaging based on diffractive networks


The development of multispectral imaging has led to significant advances in environmental monitoring, astronomy, agricultural sciences, biomedicine, medical diagnostics, and food quality control. Traditional color cameras are the most common and most basic spectral imaging devices, collecting information through three color channels: red (R), green (G), and blue (B).

Traditional RGB color cameras rely on a spatially periodic array of color filters. Each repeating 2×2 unit cell contains subpixels with specific color filters that selectively transmit red, green, or blue light while blocking the other channels. Although this design is widely used across imaging applications, extending it to more spectral bands by simply adding filters is difficult because of low transmission efficiency, severe spectral crosstalk, and poor color rendering quality.
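To make the filter-array idea concrete, the following minimal Python sketch simulates how an RGGB Bayer mosaic subsamples a full-color scene. It is a toy illustration of the conventional design described above, not code from the paper, and the RGGB layout is just one common variant.

```python
import numpy as np

def bayer_mosaic(rgb):
    """Sample an RGB image onto an RGGB Bayer color filter array.

    Each 2x2 unit cell keeps one red, two green, and one blue
    sample; the other channel values at each pixel are discarded.
    """
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R at even rows, even cols
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G at even rows, odd cols
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G at odd rows, even cols
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B at odd rows, odd cols
    return mosaic

# Example: a random "scene" sampled through the filter array
scene = np.random.rand(4, 4, 3)
print(bayer_mosaic(scene))
```

Because three of every four color samples are discarded at each pixel, a demosaicing step must interpolate the missing values, which is one source of the efficiency and crosstalk limitations noted above.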

Recently, Aydogan Ozcan’s team at UCLA used deep learning to design a diffractive multispectral imaging network that serves as a spectral filter array. The network provides 16 independent spectral passbands and is built from closely spaced passive diffractive layers structured through surface engineering. It is compact, with a total axial length of only about 50-100 wavelengths. Thanks to this compactness, the thin optical element can convert existing image sensors and focal-plane arrays into multispectral imaging systems for use in microscopy, spectroscopy, and remote sensing.
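The forward physics behind such a diffractive network is scalar free-space diffraction between thin phase-modulating layers. The sketch below is an illustrative numpy model of angular-spectrum propagation through a stack of phase-only layers, with made-up grid size, wavelength, pitch, and spacing values; it is not the authors' code, which additionally optimizes the layer phases with deep learning.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex field a distance z in free space using
    the scalar angular spectrum method."""
    n = field.shape[0]                            # square grid assumed
    fx = np.fft.fftfreq(n, d=dx)                  # spatial frequencies (1/m)
    FX, FY = np.meshgrid(fx, fx)
    arg = (1.0 / wavelength) ** 2 - FX**2 - FY**2
    mask = arg > 0                                # keep propagating waves only
    kz = 2 * np.pi * np.sqrt(np.where(mask, arg, 0.0))
    H = np.where(mask, np.exp(1j * kz * z), 0.0)  # free-space transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

def diffractive_network(field, phase_layers, wavelength, dx, dz):
    """Pass a field through a stack of phase-only diffractive layers,
    with free-space propagation between consecutive elements."""
    for phase in phase_layers:
        field = angular_spectrum_propagate(field, wavelength, dx, dz)
        field = field * np.exp(1j * phase)        # thin-layer phase modulation
    return angular_spectrum_propagate(field, wavelength, dx, dz)  # to sensor

# Toy example: 3 random layers, 64x64 grid, green light, ~75-wavelength gaps
wl, dx = 520e-9, 260e-9
layers = [np.random.uniform(0, 2 * np.pi, (64, 64)) for _ in range(3)]
out = diffractive_network(np.ones((64, 64), dtype=complex), layers, wl, dx, 75 * wl)
print(np.abs(out).max())
```

In an actual design of this kind, the phase values of each layer would be treated as trainable parameters and optimized by gradient descent through a differentiable version of this forward model, so that each spectral band is routed to its assigned output pixels.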

Figure 1: A multispectral imager based on a diffractive optical network, providing high-quality imaging performance and spectral signal contrast. This diffractive multispectral camera transforms monochrome image sensors into snapshot multispectral imaging devices without traditional filters or digital reconstruction algorithms.

The article, published in Light: Science & Applications, is titled “Snapshot multispectral imaging using a diffractive optical network,” with Deniz Mengu as first author and Aydogan Ozcan as corresponding author. The study was led by Dr. Aydogan Ozcan, Chancellor’s Professor and Volgenau Chair for Engineering Innovation at UCLA and a Howard Hughes Medical Institute (HHMI) Professor. The other authors, Deniz Mengu, Anika Tabassum, and Mona Jarrahi, are all from UCLA’s Department of Electrical and Computer Engineering. Professor Ozcan also holds appointments in the Bioengineering and Surgery Departments at UCLA and serves as Associate Director of the California NanoSystems Institute (CNSI).

This diffractive network-based multispectral imager is optimized through deep learning to route different spectral channels to different pixels on the output image plane. Acting as a virtual spectral filter array, it instantly delivers two-dimensional spatial and spectral information about an object while preserving the spatial features of the input scene, without any image reconstruction algorithm. In effect, the diffractive multispectral imaging network converts a monochrome image sensor into a snapshot multispectral imaging device without traditional filters or numerical algorithms.
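As a sketch of the output geometry only: if the 16 passbands repeat over a 4×4 unit cell of sensor pixels (an assumed layout for illustration; the actual channel-to-pixel assignment is learned during optimization), the monochrome snapshot can be modeled as below.

```python
import numpy as np

def virtual_filter_array_readout(spectral_cube):
    """Emulate the output geometry of a 4x4 virtual spectral filter
    array: each pixel of the monochrome sensor reports one of 16
    spectral channels, chosen by its position within the 4x4 cell.

    spectral_cube: array of shape (H, W, 16) holding the scene
    radiance in 16 bands. Returns a monochrome (H, W) snapshot in
    which channel c = 4 * (row % 4) + (col % 4) is sampled.
    """
    h, w, nbands = spectral_cube.shape
    assert nbands == 16
    rows, cols = np.indices((h, w))
    channel = 4 * (rows % 4) + (cols % 4)   # periodic 4x4 band layout
    return np.take_along_axis(
        spectral_cube, channel[..., None], axis=2
    )[..., 0]

cube = np.random.rand(8, 8, 16)
snapshot = virtual_filter_array_readout(cube)
print(snapshot.shape)  # (8, 8) monochrome frame carrying 16 bands
```

Regrouping pixels that share the same (row % 4, col % 4) offset then yields 16 lower-resolution band images from a single monochrome exposure, with no digital reconstruction step.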

The paper reports that the diffractive optical network-based multispectral imager provides excellent spatial imaging performance and spectral signal contrast at the same time. The results show that the imager can achieve an average transmission efficiency of ~79% across its spectral bands without significantly compromising the system’s spatial imaging performance or spectral signal contrast. (Source: Light: Science & Applications WeChat public account)

Related paper information: https://doi.org/10.1038/s41377-023-01135-0



