Early this year, a novice Go player soundly defeated one of the best computer programs at the game. They did it by employing an approach developed with the help of a program that scientists built to probe AI systems like KataGo for exploitable weaknesses.
Since AlphaGo’s historic win in 2016, human players have been credited with showing greater innovation of their own. Human Go players have become less predictable in recent years, according to a study published in the journal PNAS by researchers from the City University of Hong Kong and Yale.
According to New Scientist, the study’s findings were based on an analysis of more than 5.8 million Go moves from professional games played between 1950 and 2021. Using a “superhuman” Go AI, a program that can both play the game and rate the quality of any single move, the researchers developed a metric they call the “decision quality index,” or DQI.
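The paper’s exact DQI formula isn’t given here, but the underlying idea can be sketched: use the engine’s win-rate estimates to score each human move by how much win probability it gives up relative to the engine’s own preferred move. The Python below is a hypothetical illustration under that assumption; the input format and the averaging into a per-game score are invented for the example, not the paper’s actual method or KataGo’s real output.

```python
# Hypothetical sketch of a DQI-style per-move score.
# Assumption: a superhuman engine has already supplied, for each position,
# its estimated win rate for the move the human played and for its own best move.

def decision_quality(played_winrate: float, best_winrate: float) -> float:
    """Score one move: 0.0 means the human matched the engine's best move;
    negative values mean win probability was given up."""
    return played_winrate - best_winrate

def game_quality(moves: list[dict]) -> float:
    """Average per-move score over a game (a crude game-level index)."""
    scores = [decision_quality(m["played_winrate"], m["best_winrate"]) for m in moves]
    return sum(scores) / len(scores)

# Toy example with made-up numbers: three moves from one game.
game = [
    {"played_winrate": 0.52, "best_winrate": 0.55},  # slightly sub-optimal
    {"played_winrate": 0.61, "best_winrate": 0.61},  # matched the engine
    {"played_winrate": 0.40, "best_winrate": 0.58},  # a clear mistake
]
print(f"game-level quality: {game_quality(game):.3f}")  # -> -0.070
```

Tracking how a score like this shifts from year to year is, roughly speaking, what lets the researchers compare the quality of professional play across eras.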
After assigning a DQI score to every move in their dataset, the team concluded that the quality of professional play had not improved significantly from one year to the next before 2016. At most, the group saw a median annual DQI improvement of 0.2.
In some years, the level of play even declined. Since 2018, after superhuman Go AIs became widely available, median DQI values have shifted by more than 0.7 per year. Over the same period, professional players’ strategies have also become more creative. In 2018, players introduced a previously unseen play combination in 88% of games, up from 63% in 2015. The team writes:
“Our findings suggest that the development of superhuman AI programs may have prompted human players to break away from traditional strategies and induced them to explore novel moves, which in turn may have improved their decision-making.”
That’s a novel twist, to be sure, but it’s not a counterintuitive one, either. In an interview with New Scientist, UC Berkeley professor Stuart Russell said, “It’s not surprising that players who train against machines will tend to make more moves that machines approve of.”