KataGo and ChatGPT Failures

AlphaGo was an amazing breakthrough and very impressive in its ability to win against professional go players. It was really surprising that evaluating a go position using neural nets and machine learning would work as well as it did. And yet it kept beating professionals.

But now, some cracks are showing. In their excellent paper “Adversarial Policies Beat Superhuman Go AIs” (https://goattack.far.ai), Tony Wang, Adam Gleave, et al. are using an adversarial approach to figure out techniques that work against KataGo. And indeed: after a few tries, I managed to kill a huge group and win, and KataGo did not see it coming until too late. It doesn’t realize that its circular group that surrounds a dead group with a large eye needs enough outside liberties to actually remove that dead group.

KataGo losing
KataGo doesn’t realize the dead white group has more liberties than the surrounding black group

This technique takes advantage of two gaps in KataGo's knowledge: (1) it doesn't know enough about liberties, and (2) it doesn't know enough about the topology of blocks on the go board. When training its model, there are special input features only for 1, 2, or 3 liberties. There's also no explicit concept of adjacent blocks; the neural net has to learn that concept from the board position alone. And the way the neural net is trained, it doesn't build good enough abstractions for those concepts. In normal games, this is sufficient to play better than any human, but in corner cases, it falls apart.

I would argue that blocks of stones, liberty counts, and race to capture are an essential part of the underlying model you need when playing go (see e.g. Richard Hunter’s book “Counting Liberties and Winning Capturing Races”). And machine learning (at least the way we’re doing it now) is not a great way to build that model. You’ll end up with gaps of knowledge and approximations that will fall down at critical points.
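The concepts Hunter's book builds on, blocks and their liberties, are easy to represent explicitly; that's exactly what the neural net has to approximate from raw board input. A minimal Python sketch of finding a block and counting its liberties by flood fill (an illustration, not KataGo's actual code):

```python
# Minimal sketch: computing a block of stones and its liberties explicitly,
# the kind of abstraction the text argues a go engine's model needs.
def block_and_liberties(board, start):
    """board: dict mapping (x, y) -> 'B' or 'W'; empty points absent.
    Returns (set of stones in the block, set of its liberty points)."""
    color = board[start]
    block, liberties, frontier = set(), set(), [start]
    while frontier:
        p = frontier.pop()
        if p in block:
            continue
        block.add(p)
        x, y = p
        for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if not (0 <= n[0] < 19 and 0 <= n[1] < 19):
                continue                # off the board
            if n not in board:
                liberties.add(n)        # empty neighbor = liberty
            elif board[n] == color:
                frontier.append(n)      # same color: part of the block
    return block, liberties

# Two black stones in a row, one white stone attached:
board = {(3, 3): 'B', (4, 3): 'B', (5, 3): 'W'}
blk, libs = block_and_liberties(board, (3, 3))
```

With explicit structures like these, a capturing race reduces to comparing liberty counts; a learned model has to rediscover that from examples, and the adversarial attack shows where that rediscovery is incomplete.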

Against that background, the failures of ChatGPT make more sense. Machine learning didn’t build a model of the world, it just learned to put words together in a way that seems to make sense. Often impressive, but a lot of recent examples demonstrate that it doesn’t actually understand what’s going on.

And machine learning for self-driving cars is also based on lots of inputs, but only a very limited model of the world. Like KataGo, it will fail in corner cases. And that’s scary.

Tournament Mode

I just added a new feature to SmartGo One: Tournament Mode. Basically, it turns off all smarts while you’re recording a game, and makes it clearly visible that you’ve turned them off.

Using an iPad or iPhone for game recording is a lot easier than pen and paper: no move number to remember, just tap the screen after each move. But with AI now much stronger than almost all players, even on mobile devices, those features must not be accessible while recording.

Here’s how Tournament Mode works in SmartGo One:

Start recording: The only way to turn it on is to start recording a new game (in My Files, tap on + at top, then New Game). Enable the Tournament Mode switch, and the top right action changes from an orange ‘Play’ to a green ‘Record’.

Tournament mode new game

During the game: While recording that game, all AI functions as well as joseki matching are disabled. A clearly visible green bar at the top indicates that you’re in recording-only mode.

Tournament mode recording

End recording: When you’re done recording, tap on the popup menu in the lower left of the board, and tap on End Recording. This immediately removes the green bar at the top, and re-enables AI features.

Tournament mode end

If you switch to another game at any time, you're also taken out of Tournament Mode. Once you're out of Tournament Mode, the only way to get back in is to start a fresh recording with an empty board.

Note that all the features that make SmartGo One so great for game recording are still available. For example, if you missed a pair of moves, you can go back and insert those; if you misplaced a move, tap and hold on that stone, and choose Replace Move.

I hope that Tournament Mode will allow both opponents and tournament organizers to feel confident that SmartGo One is being used for recording only. From the rules of the Dutch Open:

“Recording your game is permitted on a digital device, as long as the screen remains visible for your opponent at all times. And your opponent has to agree with recording the game digitally. If you want to record your game digitally, this will only be allowed on applications vetted in advance by the organization of the tournament, to make sure it does not have AI functionality. Recording your game with a paper kifu is of course permitted.”

These seem like good rules, especially making sure that the screen is visible to the opponent at all times. If that green bar ever disappears, tell your opponent to put the phone away.

Please let me know how this feature works for you, either as a player or a tournament organizer. Any tweaks that would make it better?

Using Strong Go Programs on Macintosh

SmartGo for Mac does not play strongly, as computer play uses my own pre-AlphaGo engine. However, like in SmartGo for Windows, you can use GTP (Go Text Protocol) to connect to strong engines and play against them.

The most recent version of SmartGo for Macintosh (0.8.18) includes some improvements in how it handles GTP engines. It's not perfect, and there's much more to be done, but hopefully it will tide you over while I keep my focus on the new SmartGo for iOS.

The first step is downloading and installing the computer go engines you want to connect to. Here are three I've tested with SmartGo for Mac, ordered from easiest to hardest to install. All assume that you're somewhat comfortable using the Terminal app; check out this iMore guide if you're new to the command line.

Pachi

The easiest way to install Pachi on the Mac is using Homebrew (which you probably have to install first). Follow these instructions:

https://brewinstall.org/Install-pachi-on-Mac-with-Brew/

Leela Zero

Find Leela Zero on Github, scroll down to I just want to play with Leela Zero right now, and follow the Homebrew instructions. You’ll also have to download a file with network weights; the link is in that same section.

KataGo

Installing KataGo is more complicated, as you have to compile it yourself. Follow the instructions for Linux at https://github.com/lightvector/KataGo.

SmartGo for Mac GTP preferences

Setting Parameters

Once you’ve installed an engine, you need to add it to SmartGo. Choose SmartGo > Preferences in the menu and click on GTP. Then click on the + icon and navigate to the executable of the engine you want to add. SmartGo uses the engine name to guess reasonable parameters, then tries to run the engine to get its name and version. If you see a green checkmark with the name and version, you’re all set. Otherwise, edit the parameters sent to the GTP engine (the third column in the table). The following basic settings work for my setup:

Leela Zero: -g --playouts 1000 --noponder -w /usr/local/Cellar/leela-zero/0.17/best-network/40b_257a_64k_q

KataGo: gtp -model /Users/anders/work/katago/cpp/models/model.txt.gz -config /Users/anders/work/katago/cpp/configs/gtp_example.cfg

Leela Zero and KataGo take a while to initialize, so even just getting name and version initially can take a minute, and SmartGo may time out. If it does, just try starting a game against the engine anyway (File > New Game, specify the engine in the dropdown for Black or White), and see if it works.
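Under the hood, that name-and-version probe is just the GTP protocol over stdin/stdout: each command gets a reply starting with `=` (success) or `?` (failure), terminated by a blank line. A rough Python sketch of the exchange (the function names are illustrative, not SmartGo's code, and the engine path is whatever you installed above):

```python
# Sketch of probing a GTP engine for its name and version, the same
# handshake SmartGo performs when you add an engine in Preferences.
import subprocess

def parse_gtp_response(raw):
    """Return (ok, payload) from a raw GTP response string.
    GTP replies start with '=' on success or '?' on failure."""
    line = raw.strip()
    if line.startswith('='):
        return True, line[1:].strip()
    if line.startswith('?'):
        return False, line[1:].strip()
    raise ValueError('not a GTP response: %r' % raw)

def probe_engine(path, args=()):
    """Ask an engine for its name and version over GTP."""
    proc = subprocess.Popen([path, *args], stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE, text=True)
    # Slow engines (Leela Zero, KataGo) may need a generous timeout.
    out, _ = proc.communicate('name\nversion\nquit\n', timeout=120)
    replies = [r for r in out.split('\n\n') if r.strip()]
    return [parse_gtp_response(r)[1] for r in replies[:2]]
```

For example, `probe_engine('/usr/local/bin/pachi')` would return something like the engine's name and version strings, the same two values SmartGo shows next to the green checkmark.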

I hope these instructions get you pointed in the right direction. I’m sorry none of this is as easy as it should be.

Highest Possible Pinnacle?

DeepMind announced that AlphaGo will no longer compete: “This week’s series of thrilling games with the world’s best players … has been the highest possible pinnacle for AlphaGo as a competitive program. For that reason, the Future of Go Summit is our final match event with AlphaGo.”

This reason is rubbish. Could AlphaGo repeat its string of 60 victories in no-komi games? Could it win a match giving handicap stones? If AlphaGo wanted to keep competing, there are many more challenges left for it to conquer.

DeepMind used Go as a very successful testbed for its deep learning algorithms: a testbed that has measurable outcomes and can generate its own test data. Winning against the world’s best doesn’t make that testbed obsolete. DeepMind said that this year’s version was using ten times less computing power than last year’s AlphaGo. Could they improve the algorithms by another factor of ten? Hundred? Thousand? Yes, by all means push into other domains and apply what you’ve learned, but don’t abandon the testbed. You have ideas on how to improve your learning algorithm for medical diagnosis or self-driving cars? Testing the effectiveness of those improvements will be a lot harder than in Go.

I’m glad the DeepMind team is publishing a set of 50 AlphaGo self-play games, and that they’re working on a teaching tool. But not pushing AlphaGo forward competitively is a mistake.

Moves to Unique Game

The Ke Jie vs. AlphaGo games quickly reached a position that was not in the GoGoD game collection of almost 90,000 professional game records: Game 1 was unique at move 5, game 2 was unique at move 7. To me, this seemed very early, and @badukaire on Twitter got me to wonder: How soon does a pro game usually reach a position that’s different from any previously played game?

Number of moves to unique game

Time for some data: I ran SmartGo’s fuseki matching on the whole GoGoD game collection (excluding handicap games). In that data set, the highest probability for a move to become unique is at move 8; the median is between move 11 and 12; the average is about move 13. Games are unique by move 7 in about 16% of games; by move 5 in only about 4%.
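The measurement itself boils down to prefix matching: a game becomes unique at the first move where its opening prefix no longer matches any other game in the collection. A simplified Python sketch (it ignores board symmetries, which real fuseki matching like SmartGo's would normalize for):

```python
# Simplified fuseki-matching measurement: for each game, find the 1-based
# move number at which its opening prefix first becomes unique in the
# collection. Ignores rotations/reflections for clarity.
from collections import Counter

def moves_to_unique(games):
    """games: list of move sequences (tuples of coordinates)."""
    prefix_counts = Counter()
    for game in games:
        for i in range(1, len(game) + 1):
            prefix_counts[game[:i]] += 1
    results = []
    for game in games:
        n = next((i for i in range(1, len(game) + 1)
                  if prefix_counts[game[:i]] == 1), None)
        results.append(n)  # None if the whole game duplicates another
    return results

games = [('D4', 'Q16', 'Q3'),      # shares its first two moves with the next
         ('D4', 'Q16', 'C3'),
         ('Q4', 'D16', 'Q16')]     # unique from move 1
```

Running this over the toy collection above gives move numbers 3, 3, and 1; run over all of GoGoD, the distribution of these numbers is what produces the move-8 peak and move-13 average quoted above.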

So it’s somewhat unusual to diverge from standard play that early, but there’s more variety of play early in the game than I expected. Also, I’m sure that a lot of games will soon be copying those moves by AlphaGo and Ke Jie, and those opening moves will be unique no more.