In recent years, inspired by the insect compound eye, artificial bionic compound eyes have attracted increasing attention. With their small size, distortion-free imaging, wide field of view, and high-sensitivity motion tracking, they offer unique optical imaging solutions that overcome the large, bulky, and heavy form factors of existing imaging devices and improve visual performance in medical endoscopy, panoramic imaging, micro-navigation, and robot vision.
The manufacture of artificial bionic compound eyes currently faces two main challenges: first, existing processes are relatively complex and of limited throughput, making it difficult to meet commercial standards; second, the curved surface of the artificial bionic compound eye does not match commercial planar imaging sensors, making the two difficult to integrate.
To address these issues, the research group of Professor Chen Qidai at Jilin University proposed a wet-assisted holographic laser processing method that greatly improves the fabrication efficiency of artificial bionic compound eyes through customized preparation and large-area replication, and combined it with an artificial-intelligence method to resolve, at the algorithm level, the mismatch between the artificial bionic compound eye and planar imaging sensors.
The research results were published online in Light: Advanced Manufacturing under the title “Holographic laser fabrication of 3D artificial compound μ-eyes”.
Figure 1: Flow diagram of wet-assisted femtosecond laser parallel fabrication of artificial bionic compound eye
In the experiment, a femtosecond laser without spatial light modulator (SLM) modulation was first used to expose the surface of a quartz substrate, and wet etching then formed the main lens of the compound eye. The femtosecond laser beam was subsequently split by the SLM and, combined with wet etching, used to process the many small eyes (ommatidia) of the compound eye in parallel. Finally, polydimethylsiloxane (PDMS) micro/nanostructure replication enabled large-scale production of the compound-eye microlens arrays. The microlens arrays prepared in this way feature high resolution and a wide field of view. To overcome the difficulty of integrating artificial bionic compound eyes with planar cameras, high-quality image reconstruction was achieved with a generative adversarial network (GAN), laying a foundation for future device integration.
Figure 2: Large-scale fabrication sample of an artificial bionic compound eye
Complex optics manufactured by holographic laser processing are scalable. To address the complexity and time-consuming nature of the process, Figure 2 illustrates the mass production of soft polydimethylsiloxane (PDMS) miniature optical components using the quartz-glass compound-eye microlenses as hard templates. The replicated micro-optics maintain high surface quality (scanning electron microscopy, Figure 2a; 3D depth-of-field images, Figure 2b).
Figure 3: Image reconstruction based on Generative Adversarial Network (GAN) deep learning algorithm
The curved profile gives the compound eye a large field of view, but it also constrains the focal positions, which lie on a curved focal surface rather than a plane. In a true biological compound eye, each small eye contains a light-guiding structure, akin to an optical fiber, that collects light and channels it directly to the retina. This arrangement, however, is hard to reconcile with current sensor schemes, in which optics and detectors are integrated on a planar chip. In principle, the parameters of each lenslet, including its height, curvature, and focal length, could be redesigned according to its position on the curved profile, but it is difficult to bring all the focal points onto a single plane in this way. To this end, the researchers proposed a deep-learning image-processing algorithm based on a generative adversarial network (GAN). Two neural networks are trained against each other: the generator is trained to minimize its loss, producing restorations that the discriminator accepts as real, while the discriminator is trained to maximize that loss by telling restored images apart from real ones. As shown in Figure 3a, the network can be trained to restore the images formed by all the small eyes using the image shown in Figure 3c, and the restoration is independent of the incident wavelength, the material refractive index, and the singlet thickness. With this technique, compound-eye imaging preserves a large field of view while significantly improving image quality, making it suitable for a wider range of application scenarios (Figure 3b).
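As a concrete illustration of this adversarial training scheme, the sketch below (Python/PyTorch) trains a generator to restore images while a discriminator learns to detect the restorations. It is a minimal sketch under stated assumptions, not the paper's implementation: the paired degraded/sharp training data, the tiny convolutional architectures, and the L1 + adversarial loss weighting are all illustrative placeholders.

```python
# Minimal sketch of GAN-based image restoration, in the spirit of the approach
# described above. NOT the authors' implementation: the paired dataset
# (degraded compound-eye captures + sharp ground truth), the tiny convolutional
# networks, and the loss weighting are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Generator(nn.Module):
    """Maps a degraded compound-eye capture to a restored image."""

    def __init__(self, channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, channels, 3, padding=1), nn.Tanh(),  # output in [-1, 1]
        )

    def forward(self, x):
        return self.net(x)


class Discriminator(nn.Module):
    """Outputs a logit: is this a real sharp image or a restoration?"""

    def __init__(self, channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 1),
        )

    def forward(self, x):
        return self.net(x)


def train_step(G, D, opt_g, opt_d, degraded, sharp, l1_weight=100.0):
    bce = nn.BCEWithLogitsLoss()

    # Discriminator step: improve its ability to separate real sharp images
    # from the generator's restorations (equivalent to maximizing the GAN
    # value function).
    opt_d.zero_grad()
    fake = G(degraded).detach()
    real_logits, fake_logits = D(sharp), D(fake)
    d_loss = bce(real_logits, torch.ones_like(real_logits)) + \
             bce(fake_logits, torch.zeros_like(fake_logits))
    d_loss.backward()
    opt_d.step()

    # Generator step: fool the discriminator (adversarial term) while staying
    # close to the ground truth (L1 reconstruction term).
    opt_g.zero_grad()
    fake = G(degraded)
    fake_logits = D(fake)
    g_loss = bce(fake_logits, torch.ones_like(fake_logits)) \
             + l1_weight * F.l1_loss(fake, sharp)
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()


# Toy usage with random tensors standing in for a real paired dataset.
if __name__ == "__main__":
    G, D = Generator(), Discriminator()
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    degraded = torch.rand(4, 3, 64, 64) * 2 - 1  # stand-in compound-eye captures
    sharp = torch.rand(4, 3, 64, 64) * 2 - 1     # corresponding ground truth
    print(train_step(G, D, opt_g, opt_d, degraded, sharp))
```

The adversarial term pushes the restorations toward the statistics of real sharp images, while the L1 term keeps each restoration pixel-wise faithful to its ground truth; the weighting between the two is a tuning choice, not a value reported in the paper.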
In summary, this study proposed an efficient femtosecond holographic laser method for fabricating artificial bionic compound eyes and introduced an artificial-intelligence method for inverse image reconstruction. The approach addresses the pain point of low manufacturing efficiency and lays a foundation for the future matching and integration of artificial bionic compound eyes with planar imaging sensors. (Source: Advanced Manufacturing WeChat public account)
Related paper information: https://doi.org/10.37188/lam.2023.026