In this clear and interesting article by Rosita Rijtano, published on Repubblica (https://bit.ly/2uiCCnB), it is explained how an AI defeated human players four times out of five in the popular video game Dota 2.
The beauty of the news is that the system learned from scratch, playing the equivalent of 180 years of games every day for a few months, through reinforcement learning algorithms.
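The self-play idea behind this kind of training can be sketched in a few lines: an agent repeatedly plays against frozen snapshots of its past selves and is nudged toward whatever wins. The sketch below is a deliberately toy, gradient-free illustration; every name and number in it is an assumption, and OpenAI's real training (large-scale reinforcement learning on massive distributed compute) is far more elaborate.

```python
import random

def play_match(policy_a, policy_b):
    """Toy match: the policy with the higher 'skill' usually wins (+1/-1 for A)."""
    margin = policy_a["skill"] - policy_b["skill"] + random.gauss(0, 1.0)
    return 1 if margin > 0 else -1

def self_play_round(policy, opponent_pool, step_size=0.05, games=100):
    """Play many games against past versions and adjust 'skill' by outcome."""
    for _ in range(games):
        opponent = random.choice(opponent_pool)
        policy["skill"] += step_size * play_match(policy, opponent)
    return policy

random.seed(0)
policy = {"skill": 0.0}
pool = [{"skill": 0.0}]       # pool of frozen past versions of the agent
for _ in range(10):           # ten "generations" of self-play
    policy = self_play_round(policy, pool)
    pool.append(dict(policy)) # snapshot the current version into the pool
```

The key design choice, mirrored from the article's description, is that the opponent is always a past version of the agent itself, so the curriculum gets harder exactly as fast as the agent improves.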
The analysis by video game experts of the root causes of this performance highlights that:
- The system can evaluate a thousand possible moves in real time (humans, of course, are far more limited in such computations)
- The AI system plays fully as a team, preferring collaborative solutions learned during the reinforcement learning phase
- The system has no ego, so it prefers collectively optimal solutions to individually optimal ones, i.e. individual or ego-driven targets are not part of the winning strategies
This is perfectly in line with our core theses:
- Collaboration outperforms competition in any context;
- Ego is, generally speaking, a limitation for humankind, because the search for individual affirmation conflicts with the optimal state of a community;
- A combination of human and artificial intelligence will embed feelings and judgment in a powerful new intelligence.
In a future post (not far off) I will try to describe a possible scenario in which AI, moving from simulation (or video gaming) environments, will come to impact real-world human behaviors.
Stay in touch.
English translation of the article (originally via Google Translate):
Playing 180 years of games a day: how artificial intelligence beat humans at Dota 2
Five neural networks have defeated teams of high-level amateur players. The next match is in July, against the best professionals in North America.
by ROSITA RIJTANO
We have all been exterminated already. At least in a video game. Four to one: that is how the first matches ended, in which five artificial intelligences challenged teams composed of some of the best amateur players of Dota 2, a video game in which two teams face each other on a battlefield with the aim of defending their own fortress and conquering the adversary's. Amid sticks and spiked clubs, mortal fingertips could do nothing against OpenAI Five: the group of neural networks developed by OpenAI, a non-profit co-founded by Elon Musk that aims to develop machine learning systems that are beneficial, in reality if nothing else.
The next match is in July, against the top-rated professionals in North America, and then in late August they will fly to Canada for "The International" tournament. There, "face to face" with the world's best, the software's skill will be put to the test. In the meantime, what counts is the defeat: another defeat for mankind, forced in recent years to suffer repeated humiliations by the machines. It is history: in 1996 IBM's supercomputer blazed the trail, ousting Garry Kasparov, the king of chess. In 2016 AlphaGo, DeepMind's program, left Go world champion Lee Sedol in tears, defeating him during a live TV broadcast followed by hundreds of thousands of people. And software developed at Carnegie Mellon University defeated four of the world's strongest poker players, winning $1.7 million.
Now not even the field of e-sports is immune to the advance of intelligent machines. They do not lack ambition and, according to experts, they are destined to beat us in every field within the next 45 years: taking away not only our work, replacing us at the stove; not only our creativity, painting and writing poetry in our place; but our fun too.
Of course, this time winning was not easy. Every Dota 2 player, at any moment, has a thousand possible moves available, against 35 in chess and 250 in Go. For this reason OpenAI Five had to undergo a hard, intensive training that led it to acquire, in a few months, the equivalent of 100 years of human experience: hours and hours in front of the screen were compressed in a flash thanks to the computing power of over 280 thousand processors and 256 graphics cards.
Every day the artificial intelligence dueled against different versions of itself, playing the equivalent of 180 years of games, until it learned the winning strategies. Above all, it learned a fundamental art of team play, the one that, in theory, during a football match would lead us to pass the ball to the unmarked teammate closest to the goal instead of attempting an impossible shot: cooperation. The researchers managed to teach it through a reward system: the agents started out pursuing individual goals, but as training continued, more value was given to shared goals.
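The reward scheme described here can be sketched simply: each agent's reward is a blend of its own reward and the team's average reward, with the blending weight annealed from 0 (purely selfish) toward 1 (fully shared) over training. The function names and the linear schedule below are illustrative assumptions, not OpenAI's actual implementation.

```python
def team_weight(step, anneal_steps=10_000):
    """Shared-reward weight: 0 early in training, 1 after anneal_steps."""
    return min(1.0, step / anneal_steps)

def shaped_rewards(individual_rewards, step):
    """Blend each agent's own reward with the team's mean reward."""
    w = team_weight(step)
    team_mean = sum(individual_rewards) / len(individual_rewards)
    return [(1 - w) * r + w * team_mean for r in individual_rewards]

# Early in training, each agent sees only its own reward:
print(shaped_rewards([1.0, 0.0, 0.0, 0.0, 0.0], step=0))       # [1.0, 0.0, 0.0, 0.0, 0.0]
# Late in training, all five agents share the team outcome equally:
print(shaped_rewards([1.0, 0.0, 0.0, 0.0, 0.0], step=10_000))  # [0.2, 0.2, 0.2, 0.2, 0.2]
```

Under this schedule, an agent that sacrifices its own score to help the team loses nothing once the shared weight reaches 1, which is exactly the ego-free behavior the article describes.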
Hence the victory, which confirms an already known capability of the machines, as well as their potentially winning weapon: they know how to collaborate better than we do, devoted as they are to a single logic, that of efficiency. "The AIs have no ego and are willing to sacrifice a player without hesitation for the common good," said Greg Brockman, one of OpenAI's founders, to The Verge, a leading site in the hi-tech world, adding that in one game, for fun, they mixed a human being into the supercomputer team. The result? He had never felt so supported in his life.