DeepMind AI beats humans in every game

An AI developed by Google DeepMind can defeat human players at chess, Go, poker, and other games that require different strategies to win. The company says the system, called "Student of Games" (SoG), is a step towards artificial general intelligence capable of performing any task at superhuman level. The paper was recently published in Science Advances.

Shall we play the game?

Martin Schmid, who worked on the system at DeepMind and now works for a start-up called EquiLibre Technologies, said the SoG model traces back to two projects. One is DeepStack, an AI developed by a team that included Schmid at the University of Alberta in Canada, and the first AI to beat human professionals at poker. The other is DeepMind's AlphaZero, which beat the best human players at games such as chess and Go.

The difference between the two models is that one focuses on imperfect-information games, where a player cannot see the full state of the game, such as an opponent's hand in poker, while the other focuses on perfect-information games, such as chess, where both players can see the positions of all the pieces at all times. The two settings require fundamentally different approaches. DeepMind hired the entire DeepStack team with the goal of building a single model that could handle both types of games, and SoG was the result.
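The perfect/imperfect-information distinction can be made concrete with a toy sketch (not DeepMind's code; the state layout and names here are invented for illustration): in a perfect-information game the player observes the whole state, while in a card game each player only observes their own card plus the public action history.

```python
from dataclasses import dataclass

@dataclass
class State:
    """Full state of a toy two-player card game."""
    hands: tuple    # (player0_card, player1_card) -- hidden from each other
    history: tuple  # public actions taken so far

def observation(state: State, player: int) -> tuple:
    """What `player` can actually see: own card + public history.

    In a perfect-information game (chess), this function would simply
    return the whole state; here it deliberately hides the other hand.
    """
    return (state.hands[player], state.history)

s = State(hands=("K", "Q"), history=("bet",))
print(observation(s, 0))  # player 0 sees the K and the bet, but not the Q
```

States that produce the same observation are indistinguishable to that player (an "information set"), which is why search algorithms built for chess cannot be applied unchanged to poker.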

According to Schmid, SoG starts out as a "blueprint" for how to learn games and then improves through practice. The beginner model can then play different games freely, teaching itself by playing against another version of itself, learning new strategies and gradually becoming more capable. While DeepMind's earlier AlphaZero could adapt only to perfect-information games, SoG can adapt to both perfect- and imperfect-information games, making it more versatile.
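The self-play idea described above can be sketched with a minimal example. This is not SoG's algorithm (which combines search, learned value networks, and counterfactual reasoning); it is plain regret matching, one of the classic building blocks in the DeepStack lineage, applied to rock-paper-scissors, where an agent playing against a copy of itself gradually drives its average strategy towards the equilibrium mix.

```python
import random

random.seed(0)

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors
# PAYOFF[a][b]: payoff for playing a against b (+1 win, -1 loss, 0 draw)
PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]

def strategy(regrets):
    """Turn cumulative regrets into a mixed strategy (regret matching)."""
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1.0 / ACTIONS] * ACTIONS

regrets = [0.0] * ACTIONS
strategy_sum = [0.0] * ACTIONS
for _ in range(20000):
    p = strategy(regrets)
    my = random.choices(range(ACTIONS), weights=p)[0]
    opp = random.choices(range(ACTIONS), weights=p)[0]  # self-play: the opponent is a copy
    for a in range(ACTIONS):
        # Regret: how much better action a would have done against the opponent's move.
        regrets[a] += PAYOFF[a][opp] - PAYOFF[my][opp]
    for a in range(ACTIONS):
        strategy_sum[a] += p[a]

avg = [s / sum(strategy_sum) for s in strategy_sum]
print(avg)  # the average strategy approaches the uniform equilibrium
```

The point of the sketch is the feedback loop: the agent's only teacher is another version of itself, exactly the "learn by playing yourself" dynamic the article describes.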

The researchers tested SoG on chess, Go, poker, and the board game Scotland Yard, as well as Leduc poker and a customized version of Scotland Yard, and found that it could beat several existing AI models and human players. Schmid said it should be able to learn other games as well: "There are a lot of games that you can just throw at it, and it's really, really good at it."

This breadth comes at the cost of a slight drop in performance compared with DeepMind's more specialized algorithms, but SoG can still easily beat the best human players at most of the games it learns. Schmid says SoG levels up not only by playing against itself, but also by exploring what might happen next from the current state of a game, even when that game has imperfect information.

"When you're playing a game like poker, it's much harder to figure out how to search for the best next move when you don't know what cards your opponent is holding," Schmid said. "So there are some ideas from AlphaZero and some ideas from DeepStack that come together in this huge mixture of ideas, and that is the Student of Games."

Michael Rovatsos of the University of Edinburgh in the United Kingdom, who was not involved in the study, said that while the research is impressive, there is still a long way to go before such an AI could be considered generally intelligent, because games are environments in which all rules and behaviors are clearly defined, unlike the real world.

"The important point to emphasize here is that this is a controlled, self-contained artificial environment in which the meaning of everything and the outcome of every action is crystal clear," Rovatsos said. "The problem is a toy problem, because however complex it may be, it isn't real." (Source: China Science Daily, Li Huiyu)
