Spiking neural networks based on event-driven vision sensors

On November 14, 2023, Beijing time, Chai Yang’s team from the Hong Kong Polytechnic University and He Yuhui’s team from Huazhong University of Science and Technology published a research paper entitled “Computational event-driven vision sensors for in-sensor spiking neural networks” in Nature Electronics.

Dynamic scenes generate large amounts of redundant data, and frame-based video processing of that motion demands substantial computing power. Neuromorphic event-driven image sensors instead capture only the dynamic motion in a scene and transmit it to a computing unit for motion recognition. However, this separation of sensor and processor introduces time delays and increases energy consumption.
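
To illustrate the data reduction the article describes (this is a conceptual sketch, not the authors' circuit), an event-driven sensor can be modeled as emitting an event only where the pixel-level intensity change exceeds a threshold, so a mostly static scene produces almost no output:

```python
import numpy as np

def frame_to_events(prev, curr, threshold=0.15):
    """Emit events only where the log-intensity change exceeds a threshold,
    mimicking how an event-driven sensor suppresses redundant static pixels.
    Returns a list of (row, col, polarity) tuples."""
    delta = np.log1p(curr.astype(float)) - np.log1p(prev.astype(float))
    rows, cols = np.nonzero(np.abs(delta) > threshold)
    return [(r, c, 1 if delta[r, c] > 0 else -1) for r, c in zip(rows, cols)]

# A mostly static scene in which a single pixel brightens:
prev = np.zeros((4, 4))
curr = np.zeros((4, 4))
curr[1, 2] = 10.0                 # the only changed pixel
events = frame_to_events(prev, curr)
print(events)                     # one event instead of 16 pixel values
```

The sparse event list replaces a full frame, which is the source of the power and bandwidth savings the article attributes to event-driven vision.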

Based on their previously proposed in-sensor computing architecture, the authors designed spiking neural networks built on computational event-driven vision sensors, which capture dynamic motion and convert it directly into programmable, sparse, information-rich spike signals. These sensors can be used to construct spiking neural networks for motion recognition, reducing redundant data at the perception stage while eliminating data transmission between the sensor and the computing unit.

The corresponding authors of the paper are Chai Yang and He Yuhui, and the first author is Zhou Yue.

Prof. Chai Yang’s research group proposed an in-sensor computing architecture for information processing at the sensor end (Nature, 2020, 579, 32-33; Nature Electronics, 2020, 3, 664-671), and demonstrated contrast enhancement in still images (Nature Nanotechnology, 2019, 14, 776-782), visual adaptation under very dim and very bright background light (Nature Electronics, 2022, 5, 84-91; Nature, 2022, 602, 364), and feature extraction for dynamic motion, among others (Nature Nanotechnology, 2023, 18, 882-888).

Dynamic vision sensors already on the market require complex pixel circuitry to detect changes in light intensity, which are then converted into digital signals, producing a stream of digital events that each contain a pixel address, the polarity of the light-intensity change, and timing information. This analog signal processing limits the temporal resolution to the order of tens of microseconds. The processed digital signals must still be transmitted frequently to a back-end spiking neural network, further limiting the temporal resolution to the order of milliseconds. In this paper's sensing-computing fusion scheme, computation is performed directly in the sensor, so the latency of motion recognition is bounded only by the response time of the device, giving microsecond-level temporal resolution and enabling fast dynamic recognition. Because the available test equipment can only resolve timing down to the microsecond range, this figure is a measurement limit; the theoretical response speed of the device can reach the nanosecond level.
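
The event format described above (pixel address, polarity, timestamp) can be sketched minimally as follows; the field names and the 1 ms binning period are illustrative assumptions, chosen only to show how frame-based hand-off to a back-end network discards fine timing:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    x: int          # pixel column address
    y: int          # pixel row address
    polarity: int   # +1 brighter, -1 darker
    t_us: int       # timestamp in microseconds

# A short hypothetical event stream from one moving edge:
stream = [Event(3, 7, +1, 100), Event(4, 7, +1, 135), Event(3, 7, -1, 180)]

# Accumulating events into millisecond frames for a back-end network
# collapses microsecond-scale timing into a single bin:
frame_period_us = 1000
binned = {e.t_us // frame_period_us for e in stream}
print(len(binned))   # all three events fall into one 1 ms bin
```

This is the bottleneck the in-sensor scheme avoids: when recognition happens inside the sensor, there is no millisecond-scale framing step between event generation and computation.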

The authors designed an event-driven pixel cell that generates a programmable spike signal when the light intensity changes. By adjusting the photoresponsivity of the photodetector, the polarity and amplitude of the spike generated by the pixel unit can be tuned, thereby emulating different synaptic weights. The first author, Zhou Yue, designed a nonvolatile photodiode device whose linear relationship between illumination intensity and photocurrent makes it well suited to sensing-computing fusion, and whose photovoltaic effect gives it a much faster optical response than phototransistors. With a floating-gate layer, charge can be stored in the device over long periods, so different photoresponsivities can be maintained even without an external voltage bias. This effectively reduces energy consumption and allows the scheme to scale to larger neural networks without concern for external biases applied to the interconnects.

Zhou Yue and members of the research group carried out circuit-level optoelectronic tests, while Fu Jiawei, a student in Prof. He Yuhui's group at Huazhong University of Science and Technology, performed network-level action-recognition simulations, using a self-made action dataset to verify the potential of the scheme for large-scale sensing-computing fusion. (Source: Web of Science)
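
The core idea of responsivity-as-weight can be sketched numerically (a simplified model under stated assumptions, not the authors' device physics): each pixel's output pulse is the light change scaled by its programmed responsivity, and the summed pulses drive a simple integrate-and-fire decision:

```python
import numpy as np

def pixel_outputs(delta_light, responsivity):
    """Each pixel's photocurrent pulse is (light change) x (programmed
    responsivity), so the responsivity plays the role of a synaptic weight."""
    return delta_light * responsivity

def fires(currents, threshold=1.0):
    """Currents summed on a shared output line drive a simple
    integrate-and-fire decision: spike if the sum crosses the threshold."""
    return float(np.sum(currents)) >= threshold

delta_light = np.array([0.0, 0.8, 0.5])   # only the moving pixels change
weights     = np.array([0.2, 0.9, 0.6])   # programmed responsivities
currents = pixel_outputs(delta_light, weights)
print(fires(currents))   # True: 0.72 + 0.30 = 1.02 >= 1.0
```

Because static pixels contribute zero current, the weighted sum is computed only over the sparse, motion-related signals, which is what lets the sensor itself act as the first layer of a spiking neural network.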
