INFORMATION TECHNOLOGY

Zhu Songchun’s team’s latest achievement: teaching robots to “read the room”


Can robots develop the ability to “read the room”? The researchers’ answer is yes. On July 14, Science Robotics, an authoritative academic journal in robotics, published the latest research results of the team of Zhu Songchun, a well-known scholar in the field of AI: real-time bidirectional human-robot value alignment. The paper proposes an explainable artificial intelligence (XAI) system, describes a computational framework through which machines can understand human values in real time, and shows how robots can communicate with humans in real time to accomplish a set of complex human-robot collaboration tasks.

An AI robot “reading” human values. Image courtesy of BIGAI.

An important foundation for human-robot collaboration

“Many tasks in life cannot be clearly described or directly expressed.” Zheng Zilong, co-first author of the paper and a researcher at the Beijing Institute for General Artificial Intelligence (BIGAI), gave an example: “Say a customer is shopping for clothes. The salesperson picks out a 4,000-yuan dress, and the customer shakes his head and says it is not suitable; the salesperson brings another one at 5,000 yuan, and he shakes his head again. At this point the salesperson understands, and brings out a dress that sells for only 800 yuan, and the customer nods. The salesperson has ‘read the room’ and grasped the customer’s underlying values in buying clothes: good quality at a low price. Across different scenarios, ensuring that an AI system can quickly and accurately identify a user’s value goals and align with human values in real time is an important foundation for human-machine collaboration, and our research team has made a breakthrough on this problem.”

Today’s widely used artificial intelligence systems are passive: they can only act mechanically on the tasks humans assign, and they lack human-like cognition and reasoning as well as human-like emotions and values. Without a “heart”, artificial intelligence struggles to understand human intentions and to serve the value needs humans actually care about, and so it naturally finds it hard to earn human trust or to integrate into human society.

“Our project focuses on human-robot collaboration to accomplish tasks,” Zheng Zilong told China Science Daily. “As the executor, the agent not only needs to understand the literal meaning of a human user’s instructions, but also needs to infer the intentions, purposes, and ideas behind those instructions, which is what we call human values or goals. If agents are to enter ordinary households, they must understand human values and be able to ‘read the room’. This process of understanding and converging toward human values is ‘value alignment’. Only after value alignment can machines perform tasks more autonomously, without relying on human instructions.”

Zheng Zilong said the experimental results also show that the value alignment mechanism can greatly strengthen the trust between human and machine during collaboration, which he sees as a necessary step on the road to artificial general intelligence.

What “values” can AI produce at this stage

Intelligent robots have no emotions or empathy, so what kind of values can they produce?

“That is a very good question.” Zhu Yixin, co-corresponding author of the paper and an assistant professor at Peking University’s Institute for Artificial Intelligence, told China Science Daily that today’s machines lack not only a “chip” but also a “heart”. “Our work aims to take an important step toward ‘building a heart’ for machines.”

Zhu Yixin explained that this important step of “building a heart” for machines is embodied in the value alignment task. “Sometimes values are easy to describe, for example, I like tea and dislike coffee. In other situations, values are harder to describe. Driving from A to B, for instance, you need to weigh fuel prices, highway tolls, the scenery along the way, road traffic conditions, and so on. Do I want to arrive quickly and sacrifice the scenery, or spend the least money while still seeing some beautiful views? The value function in such cases is a weighting over these several factors.”

In the paper on real-time bidirectional human-robot value alignment, the team describes this with a similar cooperative task: the values are the weights placed on factors such as execution time, the size of the area explored, and the amount of resources collected.
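To make this concrete, below is a minimal sketch of such a value function as a weighted combination of task factors, written in Python. The factor names, weights, and scoring form are illustrative assumptions for exposition, not details taken from the paper.

from dataclasses import dataclass

@dataclass
class PlanOutcome:
    execution_time: float       # normalized time to finish the task (lower is better)
    explored_area: float        # fraction of the map explored (higher is better)
    resources_collected: float  # fraction of available resources gathered (higher is better)

def value(outcome: PlanOutcome, weights: dict) -> float:
    """Score a candidate plan as a weighted combination of its factors."""
    return (-weights["time"] * outcome.execution_time
            + weights["exploration"] * outcome.explored_area
            + weights["resources"] * outcome.resources_collected)

# Two hypothetical users with different value weightings.
speed_first = {"time": 1.0, "exploration": 0.2, "resources": 0.3}
resource_first = {"time": 0.2, "exploration": 0.3, "resources": 1.0}

plan = PlanOutcome(execution_time=0.6, explored_area=0.8, resources_collected=0.5)
print(value(plan, speed_first), value(plan, resource_first))  # same plan, different scores

The same plan scores very differently under the two weightings, which is exactly the ambiguity a robot has to resolve by interacting with the user.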

Can humans create an “AI Baymax”?

In the animated film Big Hero 6, there is an intelligent companion robot called Baymax. Baymax studies and plays games alongside the film’s young protagonist, interacting with him with a high degree of immediacy. And when the protagonist is feeling down, Baymax can also “read” his emotional needs, comfort him proactively, and give him a big hug.

Can humans create an “AI Baymax”? This is in fact a direction that robotics researchers are currently pursuing.

Having recognized the inherent limitations of the “big data, small tasks” paradigm, Professor Zhu Songchun’s research team switched tracks and committed itself to exploring the “small data, big tasks” paradigm. Zhu Yixin explained that this study proposes an explainable artificial intelligence system built on a real-time bidirectional value alignment model. In this system, a group of robots infers a human user’s value goals from real-time interaction and feedback, while communicating their decision-making process to the user through “explanations”, so the user can understand the value basis on which the robots make their judgments. In addition, by inferring the human’s intrinsic value preferences and predicting the best way to explain itself, the system generates explanations that are easier for humans to understand.
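To illustrate what “inferring a user’s value goals from feedback” can look like computationally, the toy sketch below maintains a belief over a few candidate value weightings and updates it after the user accepts or rejects a proposed plan. It is a generic Bayesian-style illustration under assumed factor names and a logistic feedback model, not the algorithm published in the Science Robotics paper.

import math

# Candidate hypotheses about what the user values (illustrative weightings).
hypotheses = {
    "speed_first":    {"time": 1.0, "exploration": 0.2, "resources": 0.3},
    "explorer":       {"time": 0.3, "exploration": 1.0, "resources": 0.3},
    "resource_first": {"time": 0.2, "exploration": 0.3, "resources": 1.0},
}
belief = {name: 1.0 / len(hypotheses) for name in hypotheses}

def score(plan: dict, weights: dict) -> float:
    # Time is a cost; exploration and resources are benefits.
    return (-weights["time"] * plan["time"]
            + weights["exploration"] * plan["exploration"]
            + weights["resources"] * plan["resources"])

def update_belief(plan: dict, accepted: bool, rationality: float = 3.0) -> None:
    """Bayesian update: weightings that rate the plan highly gain probability
    when the user accepts it and lose probability when the user rejects it."""
    for name, weights in hypotheses.items():
        p_accept = 1.0 / (1.0 + math.exp(-rationality * score(plan, weights)))
        belief[name] *= p_accept if accepted else 1.0 - p_accept
    total = sum(belief.values())
    for name in belief:
        belief[name] /= total

# The user rejects a slow, resource-heavy plan, then accepts a fast, lean one.
update_belief({"time": 0.9, "exploration": 0.4, "resources": 0.9}, accepted=False)
update_belief({"time": 0.2, "exploration": 0.3, "resources": 0.2}, accepted=True)
print(belief)  # probability mass shifts toward "speed_first"

In the system described in the paper the communication runs both ways: alongside this kind of inference, the robots also explain their proposals, so the user can see which value weighting the current behavior reflects and correct it.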

The research team validated the proposed computational framework through a series of experiments. The results show that the learned model can improve the efficiency of human-machine collaboration on complex cooperative tasks, thereby strengthening human-machine trust and moving toward “autonomous intelligence”. The results also show that an AI system can learn a human’s value function through real-time communication and align with the human’s current value goals on the fly.

Shifting traditional AI from “data-driven” to “value-driven”, so that the XAI system understands human values, is what the research team regards as “building a heart for the machine”, and a big step toward realizing the “small data, big tasks” paradigm.

Zhu Songchun’s team has long worked on explainable artificial intelligence, and this is its second paper on the topic published in Science Robotics. The research spans disciplines including cognitive reasoning, natural language processing, machine learning, and robotics, and is a concentrated showcase of the team’s interdisciplinary work. (Source: China Science Daily, Zhao Guangli)

Related paper information: https://doi.org/10.1126/scirobotics.abm4183


