Category: Go

EGC Oberhof 2017

My travel plans lined up to allow me to go to the European Go Congress in Oberhof, Germany, this year. Here’s a brief summary of my experiences.

My US rating has been pretty stable at 3 dan for years, and I registered as 3 dan for this tournament. That was probably a mistake; turns out there are many European 1- and 2-dans who could compete as 3 dan in the US.

I ended up with 2 wins and 8 losses in the main tournament, 1 win and 4 losses in the weekend tournament. All my games were interesting, and I’ve learned a lot, but it’s hard not to let your tournament performance affect your mood as well as your play. The first week was also colored by jet lag and almost constant rain, which didn’t help.

Some comparisons to the US Go Congress:

  • Main tournament has ten rounds instead of six. That is great, except when you keep losing.
  • Two hours per player instead of 90 minutes. I like the longer time limits, but 4-hour games are exhausting; maybe I should have used the sealed move and taken a break for lunch. Also, starting at 10 am instead of 9 am, combined with the longer games, made the timing of meals weird.
  • Weekend tournament (five rounds, one-hour time limit) is an added bonus. Also some other side tournaments not seen in the US: Chess & Go, Yose Go, Phantom Rengo.
  • Tournament times were not coordinated well with hotel meal schedules: some tournaments started at 5:45 pm, while dinner was not available until 6 pm. The nearby town had a lot of good food options, but scheduling was tricky.
  • Fewer pros, fewer lectures, fewer game reviews: The US Go Congress does a better job of organizing pro events.
  • Cheaper: I paid about as much for two weeks here as for one week at the US Go Congress.
  • More people: This was the biggest European Go Congress ever, with over 900 players.
  • You hear many more languages. I could use my German, Swiss German, Norwegian, and a bit of French, but English will carry you through without a problem.

The other difference is more personal: at the US Go Congress, I know all the organizers and lots of players, and they know me. Here I’m mostly incognito. I got to know a bunch of players, but it still feels quite different.

Overall a great experience, even though I’m not happy with my results. Next year is in Pisa, Italy; 2019 is in Brussels, Belgium. My advice if you can make it:

  • Try to get there a few days early to recover from jet lag.
  • Possibly adjust your rank; there seems to be at least a one-rank difference in the low dans.
  • Figure out your plan for 4-hour games: bananas, chocolate, energy bars, coffee, whatever it takes to keep your concentration.

Best of luck to everybody now at the US Go Congress!

Highest Possible Pinnacle?

DeepMind announced that AlphaGo will no longer compete: “This week’s series of thrilling games with the world’s best players … has been the highest possible pinnacle for AlphaGo as a competitive program. For that reason, the Future of Go Summit is our final match event with AlphaGo.”

This reason is rubbish. Could AlphaGo repeat its string of 60 victories in no-komi games? Could it win a match giving handicap stones? If AlphaGo wanted to keep competing, there are many more challenges left for it to conquer.

DeepMind used Go as a very successful testbed for its deep learning algorithms: a testbed that has measurable outcomes and can generate its own test data. Winning against the world’s best doesn’t make that testbed obsolete. DeepMind said that this year’s version was using a tenth of the computing power of last year’s AlphaGo. Could they improve the algorithms by another factor of ten? A hundred? A thousand? Yes, by all means push into other domains and apply what you’ve learned, but don’t abandon the testbed. You have ideas on how to improve your learning algorithm for medical diagnosis or self-driving cars? Testing the effectiveness of those improvements will be a lot harder than in Go.

I’m glad the DeepMind team is publishing a set of 50 AlphaGo self-play games, and that they’re working on a teaching tool. But not pushing AlphaGo forward competitively is a mistake.

Moves to Unique Game

The Ke Jie vs. AlphaGo games quickly reached a position that was not in the GoGoD game collection of almost 90,000 professional game records: Game 1 was unique at move 5, game 2 was unique at move 7. To me, this seemed very early, and @badukaire on Twitter got me to wonder: How soon does a pro game usually reach a position that’s different from any previously played game?

Number of moves to unique game

Time for some data: I ran SmartGo’s fuseki matching on the whole GoGoD game collection (excluding handicap games). In that data set, the highest probability for a move to become unique is at move 8; the median is between moves 11 and 12; the average is about move 13. About 16% of games are unique by move 7; only about 4% by move 5.
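
For the curious, here’s a minimal sketch of how such numbers can be computed (hypothetical illustration code, not SmartGo’s fuseki matching, which also has to normalize the eight board symmetries):

    from collections import Counter

    def moves_to_unique(games):
        """For each game (a list of moves like ['pd', 'dp', ...]), find
        the move number at which its opening no longer matches any
        other game in the collection."""
        # Count how many games contain each opening prefix.
        prefix_counts = Counter()
        for moves in games:
            for n in range(1, len(moves) + 1):
                prefix_counts[tuple(moves[:n])] += 1
        # A game becomes unique at the first prefix no other game shares.
        result = []
        for moves in games:
            unique_at = next((n for n in range(1, len(moves) + 1)
                              if prefix_counts[tuple(moves[:n])] == 1), None)
            result.append(unique_at)
        return result

    openings = [
        ["pd", "dp", "pp", "dd", "fq"],   # toy 5-move openings
        ["pd", "dp", "pp", "dd", "nc"],
        ["pd", "dp", "pq", "dd", "fq"],
    ]
    print(moves_to_unique(openings))   # [5, 5, 3]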

So it’s somewhat unusual to diverge from standard play that early, but there’s more variety of play early in the game than I expected. Also, I’m sure that a lot of games will soon be copying those moves by AlphaGo and Ke Jie, and those opening moves will be unique no more.

Search and URL Scheme

The newest versions of SmartGo Kifu and SmartGo for Macintosh both include the enhanced names dictionary by John Fairbairn (GoGoD), with mini-biographies of over 4,000 players: life, career, status, teacher, Go style, and notes. Just tap on the player name above the board to see the biography.

[Screenshot: Takemiya Masaki’s player biography in SmartGo Kifu]

Improved search

The names dictionary includes translations as well as alternate names, and these are now used to significantly improve searching for players. Just type in the search bar, and it will try to match any property containing that text.

You can use ! to negate, e.g. type ‘Kato !Masao’ to look for all the other players named Kato. Anything that looks like a four-digit year will be matched to the date property, and you can search for a range, so e.g. ‘1990-1994 Takemiya’ will search for games Takemiya played during those years.
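
A rough sketch of that behavior (hypothetical code, not SmartGo’s actual parser): the quick search essentially splits the text into positive terms, negated terms, and an optional year range:

    import re

    def parse_quick_search(text):
        """Split a quick search into positive terms, negated terms, and
        an optional year range."""
        include, exclude, years = [], [], None
        for token in text.split():
            m = re.fullmatch(r"(\d{4})(?:-(\d{4}))?", token)
            if m:
                years = (int(m.group(1)), int(m.group(2) or m.group(1)))
            elif token.startswith("!"):
                exclude.append(token[1:])
            else:
                include.append(token)
        return include, exclude, years

    print(parse_quick_search("Kato !Masao"))        # (['Kato'], ['Masao'], None)
    print(parse_quick_search("1990-1994 Takemiya")) # (['Takemiya'], [], (1990, 1994))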

For more precise searches, you can test for specific properties and conditions, and combine conditions using & (and) and | (or). For example, you can type ‘winner=Lee Sedol & result~~0.5’ to find half-point wins by Lee Sedol (spelled Yi Se-tol in the game collection).

[Screenshot: search results for ‘winner=Lee Sedol & result~~0.5’]

URL scheme

This kind of search is powerful within the app, but you can now access it from other apps too, thanks to the smartgo:// URL scheme. For example, the following link will get you directly to Shusaku’s ear-reddening move:

smartgo://games?id=1846-09-11a#127

Or find all the games played between AlphaGo and Ke Jie:

smartgo://games?player==AlphaGo & player==Ke Jie

Or find cool kyu-level problems:

smartgo://problems?coolness=10 & difficulty<=1k

Recent games of Gu Li playing black against Lee Sedol:

smartgo://games?black==Gu Li & white==Lee Sedol & date>=2012

Games that Takemiya won by resignation playing black against a 9 dan:

smartgo://games?black=Takemiya Masaki & result=B+R & rankw=9d

Games played in the Kisei or Honinbo tournaments:

smartgo://games?event~~Kisei | event~~Honinbo

Three-stone handicap games played in the ’90s:

smartgo://games?handicap=3 & date>=1990 & date<=1999

Single-digit kyu life and death problems:

smartgo://problems?difficulty<=1k & difficulty>=9k & genre~~life
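
If you want to trigger these searches from your own scripts, here’s a minimal sketch (hypothetical; it assumes macOS with SmartGo for Macintosh installed, and that the app accepts percent-encoded spaces):

    import subprocess
    from urllib.parse import quote

    def open_in_smartgo(kind, query):
        """Build a smartgo:// URL and hand it to the OS, which routes it
        to whatever app registered the smartgo:// scheme."""
        url = "smartgo://%s?%s" % (kind, quote(query, safe="=&|<>~!^#+"))
        subprocess.run(["open", url], check=True)

    open_in_smartgo("games", "winner=Lee Sedol & result~~0.5")
    open_in_smartgo("problems", "coolness=10 & difficulty<=1k")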

Please let me know how you use this new feature, and what could make it more useful to you.

Properties and operators

Here’s the complete list of properties currently supported (SGF tags in parentheses):

  • Player: player (PB/PW), black (PB), white (PW), winner (PB/PW/RE), loser (PB/PW/RE), rankb (BR), rankw (WR).
  • Game info: id (GN), date (DT), event (EV), round (RO), komi (KM), handicap (HA), oldhandicap (OH), result (RE), rules (RU), time (TM), source (SO), analysis (AN), user (US), comment (GC).
  • Problems: difficulty (DI), coolness (CO), genre (GE).
  • Special: favorite (FA), any (any game info property).

The following operators are supported (comparisons are not case sensitive):

  • == or = : Equal
  • != : Not equal
  • ^= : Starts with
  • ~~ : Contains
  • !~ : Does not contain
  • >= : At least
  • <= : At most
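
To make the semantics concrete, here’s a sketch of how a single condition might be matched (illustration only, not SmartGo’s implementation; it compares everything as lowercase strings, which works for dates and names but would need real numeric handling for komi):

    import re

    # One comparison per operator; both sides are lowercased first.
    OPERATORS = {
        "==": lambda a, b: a == b,
        "=":  lambda a, b: a == b,
        "!=": lambda a, b: a != b,
        "^=": lambda a, b: a.startswith(b),
        "~~": lambda a, b: b in a,
        "!~": lambda a, b: b not in a,
        ">=": lambda a, b: a >= b,
        "<=": lambda a, b: a <= b,
    }

    def matches(condition, game):
        """Check one 'property OP value' condition against a dict of
        game info properties."""
        prop, op, value = re.match(
            r"(\w+)\s*(==|!=|\^=|~~|!~|>=|<=|=)\s*(.+)", condition).groups()
        actual = str(game.get(prop, "")).lower()
        return OPERATORS[op](actual, value.strip().lower())

    game = {"winner": "Yi Se-tol", "result": "W+0.5", "date": "2003-03-06"}
    print(matches("result~~0.5", game))   # True
    print(matches("winner^=yi", game))    # True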

Go Congress 2016

I really enjoyed the Go Congress in Boston this year. Some observations:

  • Next year, I will bring a 9.7″ iPad. The 12.9″ iPad Pro just doesn’t fit well between Go boards at the tournament, so I ended up using my iPhone to record games. Luckily, there’s an app for that.
  • Brady Daniels makes a good case that you should come to the next Go Congress. And Kevin’s Go Talk about “What did you like most about the Go Congress?” clearly shows that people are a main feature, not just Go. Indeed, it was great to meet many old friends again, and to meet new ones in real life for the first time, in particular David Nolen, John Tromp, and Jonathan Hop.
  • I always get a lot of valuable feedback from SmartGo Kifu and Go Books users at the Congress, mostly positive, some feature requests. Here’s a happy SmartGo user from Kyoto: Go instructor Yasuko Imamura.

[Photo: Yasuko Imamura]

  • There were several interesting talks about AlphaGo (watch the Opening Keynote and AlphaGo Insider). It’s clear that AlphaGo is adding to and not taking away from Go. I’m really looking forward to the commented AlphaGo games the DeepMind team teased several times.
  • I just realized that I never made it to the vendor area in the basement. Future Congress organizers: please put the vendors where everybody sees them.
  • The 13×13 tournament is usually a fun warm-up for the main tournament; I hope it will be back next year.

Looking forward to San Diego in 2017! See you all there.

Wishful Thinking

Lee Sedol’s strategy in game 4 worked brilliantly (well explained in the excellent Go Game Guru commentary). It took AlphaGo from godlike play to kyu-level petulance. When it no longer saw a clear path to victory, it started playing moves that made no sense.

AlphaGo optimizes its chance of winning, not its margin of victory. As long as that chance of winning was good, this worked well. When the chance of winning dropped, AlphaGo’s quality of play fell precipitously. Why?

Ineffective threats

The bad moves that AlphaGo played include moves 87 and 161: threats that just don’t work, as they can easily be refuted, and either lose points or at least reduce future opportunities. When AlphaGo plays such a move, it’s smart enough to find the correct local answer and figure out that the move doesn’t actually work. However, the Monte Carlo Tree Search (MCTS) component will also look at other moves that don’t answer that threat, as there is always a chance that the opponent plays elsewhere. Thus AlphaGo sees a non-zero chance that the threat actually works, and because of the way MCTS averages its statistics, it concludes that the threat increases its chance of winning.

Of course, the opposite is true. Playing a threat that can easily be refuted is just wishful thinking. The value network would figure out that such an exchange actually makes the position worse, but it doesn’t know that it should override the Monte Carlo simulations in this case.
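
A toy calculation shows how the averaging inflates the value of a refutable threat (all numbers invented for illustration):

    # Toy numbers: the threat is refuted if answered, works if ignored.
    p_answered = 0.05   # opponent refutes the threat: nearly lost
    p_ignored  = 0.95   # opponent plays elsewhere: nearly won

    # MCTS backs up the average over all playouts. If 20% of the
    # playouts have the opponent ignoring the threat:
    value = 0.8 * p_answered + 0.2 * p_ignored
    print(value)   # 0.23 -- well above the honest 0.05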

Adjusting komi

One way to avoid this effect is to internally adjust the komi until the program has a good chance of winning. This causes the program to play what it thinks are winning moves, while in fact it will lose by the few points of that artificial adjustment. If the opponent makes a mistake, the program might regain a real winning position later. (SmartGo uses this technique; it also helps play more reasonable moves in handicap games.)
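
Here’s a minimal sketch of that idea (the thresholds and step size are invented; SmartGo’s actual values may differ):

    def update_score_offset(offset, win_rate, low=0.35, high=0.65, step=0.5):
        """Maintain a virtual score adjustment in the program's favor so
        its estimated win rate stays in a band where it keeps playing
        sensible 'winning' moves."""
        if win_rate < low:
            offset += step   # losing: pretend the target is a bit easier
        elif win_rate > high:
            offset -= step   # comfortably ahead: tighten the target again
        return offset

    # When scoring a playout (say the program plays Black):
    #   black_wins = black_score - white_score - komi + offset > 0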

For AlphaGo, that technique won’t work well: as I understand it, the value network is trained to recognize whether positions are good for Black or for White, not by how many points a player is ahead.

Known unknowns

Another idea is to look at the source of uncertainty in MCTS. The Monte Carlo winning percentages are based on statistics from the playouts, and there are many uncertainties in that process due to the random nature of the playouts and the limited nature of the search. The more moves you look at, the smaller the unknowns become, and the statistical methods used to figure out which moves to explore more deeply and how to back up results in the search tree try to minimize these uncertainties.

However, whether the opponent will answer a threat is a yes-or-no decision; it should not be treated like a statistical unknown. In that case, you want to back up the results in the tree using minimax, not percentages. Something for the DeepMind team to work on before they challenge Ke Jie, so AlphaGo won’t throw another tantrum.
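
Continuing the toy numbers from above, the two backup rules at an opponent node differ in one line each (again a sketch, not DeepMind’s code):

    # Opponent's two replies to the threat: (value for us, visit count).
    replies = [(0.05, 800),   # correct refutation
               (0.95, 200)]   # opponent ignores the threat

    def backup_average(children):
        """Standard MCTS backup: visit-weighted average of child values."""
        total = sum(visits for _, visits in children)
        return sum(value * visits for value, visits in children) / total

    def backup_minimax(children):
        """At an opponent node, assume the best reply (worst for us)."""
        return min(value for value, _ in children)

    print(backup_average(replies))   # 0.23: the threat looks tempting
    print(backup_minimax(replies))   # 0.05: the threat is simply refuted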

AlphaGo Don’t Care

AlphaGo is badass. Like the honey badger, AlphaGo just don’t care.

Lee Sedol may have underestimated AlphaGo in game 1, but he knew what he was up against in game 2. I watched Michael Redmond’s commentary during the game, then Myungwan Kim’s commentary this morning. The Go Game Guru commentary is also very helpful.

The tenuki at move 13: Professionals always extend at the bottom first? AlphaGo don’t care. It builds a nice position at the top instead.

The peep at move 15: This is usually played much later in the game, and never without first extending on the bottom. AlphaGo don’t care. It adds 29 later, and makes the whole thing work with the creative shoulder hit of 37. It even ends up with 10 points of territory there.

With 64 and 70, Lee Sedol made his group invulnerable to prepare for a fight at the top. AlphaGo don’t care, it just builds up its framework, and then shows a lot of flexibility in where it ends up with territory.

Lee Sedol threatens the territory at the top with 166? AlphaGo don’t care, it just secures points in the center instead. Points are points, it doesn’t matter where on the board they are.

What can Lee Sedol do in the next games? I think he needs to get a complicated fight going early in the game, start ko fights, in general increase the complexity. But I fear AlphaGo just won’t care.