Author: Anders Kierulf

Game Center

SmartOthello as released mid-August:

[Screenshot: SmartOthello with player avatars]

And here’s SmartOthello mid-October:

[Screenshot: SmartOthello without player avatars]

The layout of the app was designed with profile pictures in mind. These player avatars disappeared when iOS 10 was released: Game Center leaderboards now show boring gray circles, and GKPlayer.loadPhoto returns nil.

Some bug reports and a Technical Support Incident later, this appears to be Apple’s intended behavior, not just a glitch. This behavior is so wrong and unlike Game Center that I think Apple will eventually backtrack, but waiting and hoping is not an option: I could not leave SmartOthello in that broken state. The newest version adds the ability to set your own profile picture and uses CloudKit to share these between players.

Matchmaking with iMessage

At WWDC in June, Apple announced that the Game Center app was going away, but not to worry, the Game Center functions were all still going to be there. Player invites would be using the newly improved iMessage; no code change needed. (Sure.)

That may have been true for the simplest matchmaking scenarios, but not for SmartOthello. I’m allowing players to set their color preference to black, white, or neutral, and their opening preference to regular or random. To start a game, I thus need that information from both players. (If they both prefer the same color, color choice will be random; random opening will only be applied if both players agree.) This added negotiation step needed extra work in iOS 10.
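
The negotiation rules above fit in a few lines. Here is a sketch in Python rather than the app's Swift; the function names are my own invention, and the neutral-versus-stated case (the stated preference wins) is my assumption, not something the post spells out:

```python
import random

def resolve_colors(pref_a, pref_b):
    """Resolve the color preferences of two players.

    Each preference is 'black', 'white', or 'neutral'.
    Returns (color_a, color_b).
    If both players prefer the same color (or both are neutral),
    the choice is random. When only one player states a preference,
    I assume that preference wins.
    """
    if pref_a != pref_b:
        if pref_a == 'black' or pref_b == 'white':
            return ('black', 'white')
        if pref_a == 'white' or pref_b == 'black':
            return ('white', 'black')
    # Same preference on both sides: choose randomly.
    return random.choice([('black', 'white'), ('white', 'black')])

def resolve_opening(wants_random_a, wants_random_b):
    """A random opening is applied only if both players agree."""
    return wants_random_a and wants_random_b
```

Both preferences have to travel with the invitation, which is what makes the extra negotiation step necessary.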

[Screenshot: invitation accepted]

Starting a game over iMessage is cumbersome, adding several extra taps and context switches to specify the opponent and start the game. Apple has work left to do there. In particular, there needs to be a way to bypass the confusing auto-match screen.

Why no profile pictures?

My guess is that the missing profile pictures are related to using iMessage for matchmaking. Many iMessage users have images associated with them, but those images come from the user’s contacts. There’s no way to map Game Center players to contacts, and for privacy reasons it’s obvious that Apple won’t make those images available through Game Center.

It would be easy for Apple to add back a profile picture in Settings > Game Center. However, when starting a match through iMessage, that opponent would then have two images: one from Game Center and one from iMessage. It’s a mess, and that may be why the Game Center images were removed. Apple dug this hole for themselves; I hope they can dig their way back out.

Game Center in SmartGo

One of my goals with SmartOthello was to learn Swift (which worked out perfectly) as well as gain experience with technologies like Game Center and iCloud before including them in SmartGo. My experience with Game Center has not been good (poor and outdated documentation, APIs not working as advertised, no way to avoid polling for invites), and Apple doesn’t seem to be paying a lot of attention to the future of Game Center. Removing the avatars was a poor decision, and matchmaking using iMessage needs a lot of work.

At least I know to steer clear of Game Center for SmartGo.


My Othello app is now available in the App Store. Even if you’re not interested in Othello/Reversi, it will give you an idea of the future direction of my Go apps. And next time you play Go and somebody asks whether that’s Othello, at least now you have an app you can recommend.

SmartOthello is 100% Swift: it was a perfect way to learn Swift while building up code I can reuse for my Go apps. It’s also my first app to support Game Center, including achievements and leaderboards. My experience with Swift has been really good; my experience with Game Center less so.

SmartOthello is also a reboot in terms of user interface. The clean design that Scott Jensen came up with for Othello will definitely influence the Swift version of SmartGo. For example, the games list sliding in from the left leaves more room for the board on the iPad; the ability to turn off the status bar again provides more room and less distraction.

The tutorial in SmartGo Player uses Go Books under the hood, so the Swift version of Go Books is up next. Yes, this conversion is taking a while, but I’m planning to live with these apps for many more years. After launching my first Swift app, I’m more convinced than ever that the investment is worth it.

Go Congress 2016

I really enjoyed the Go Congress in Boston this year. Some observations:

  • Next year, I will bring a 9.7″ iPad. The 12.9″ iPad Pro just doesn’t fit well between Go boards at the tournament, so I ended up using my iPhone to record games. Luckily, there’s an app for that.
  • Brady Daniels makes a good case that you should come to the next Go Congress. And Kevin’s Go Talk about “What did you like most about the Go Congress?” clearly shows that people are a main feature, not just Go. Indeed, it was great to meet many old friends again, and to meet new ones in real life for the first time, in particular David Nolen, John Tromp, and Jonathan Hop.
  • I always get a lot of valuable feedback from SmartGo Kifu and Go Books users at the Congress, mostly positive, some feature requests. Here’s a happy SmartGo user from Kyoto: Go instructor Yasuko Imamura.

[Photo: Yasuko Imamura]

  • There were several interesting talks about AlphaGo (watch the Opening Keynote and AlphaGo Insider). It’s clear that AlphaGo is adding to and not taking away from Go. I’m really looking forward to the commented AlphaGo games the DeepMind team teased several times.
  • I just realized that I never made it to the vendor area in the basement. Future Congress organizers: please put the vendors where everybody sees them.
  • The 13×13 tournament is usually a fun warm-up for the main tournament; I hope it will be back next year.

Looking forward to San Diego in 2017! See you all there.


That separate Swift project I hinted at in December? Time to announce what it is: an app for Othello (also known as Reversi).

Why Othello?

As a two-player board game, Othello is similar enough to Go that much of the Swift code for an Othello app can be reused for Go. But Othello apps are a dime a dozen in the App Store: who needs another one? Well, you do — you deserve better than the current crop of Othello apps.

Relevant experience

Most people associate me with only one game: Go. However, I do have a bit of history with Othello.

  • Computer Othello: My first Othello program played in a tournament in Santa Cruz in 1981, long before I first made it to the USA. My work on Othello got Prof. J. Nievergelt to introduce me to Go, and my Ph.D. thesis included a chapter on Othello (“Smart Game Board: a Workbench for Game-Playing Programs, with Go and Othello as Case Studies”).
  • Human Othello: I was Swiss Othello Champion in 1983, 84, 85, and 89, and United States Othello Champion in 1992. My tournament experience includes six Othello World Championships: Paris (1983 & 1988), Warsaw (1989), Stockholm (1990), New York (1991), and Barcelona (1992).

Unique combination

So yes, combining years of iPhone development, user interface experience from SmartGo, and expert knowledge of Othello, I do think I have something unique to bring to a crowded field of Othello apps.

I’ve been working on SmartOthello with designer Scott Jensen (@_scottjensen); it’s making good progress, and I have just started limited beta testing. I’m very excited about how it’s turning out, and what it means for the future of my Go apps.

More later. Meanwhile, you can sign up for news about SmartOthello, and follow @smartOthello on Twitter or Facebook.

PS: I played in an Othello tournament in Los Angeles in March: 4 wins and 6 losses, definitely a bit rusty. At least I scored a 33-31 win against former World Champion Ben Seeley.

Wishful Thinking

Lee Sedol’s strategy in game 4 worked brilliantly (well explained in the excellent Go Game Guru commentary). It took AlphaGo from godlike play to kyu-level petulance. When it no longer saw a clear path to victory, it started playing moves that made no sense.

AlphaGo optimizes its chance of winning, not its margin of victory. As long as that chance of winning was good, this worked well. When the chance of winning dropped, AlphaGo’s quality of play fell precipitously. Why?

Ineffective threats

The bad moves that AlphaGo played include moves 87 and 161: threats that just don’t work, as they can easily be refuted, and either lose points or at least reduce future opportunities. When AlphaGo plays such a move, it’s smart enough to find the correct local answer and figure out that the move doesn’t actually work. However, the Monte Carlo Tree Search (MCTS) component will also look at moves that don’t answer the threat, as there is always a chance that the opponent plays elsewhere. AlphaGo thus sees a non-zero chance that the threat works, and because of the way MCTS aggregates its statistics, it concludes that the threat increases its chance of winning.

Of course, the opposite is true. Playing a threat that can easily be refuted is just wishful thinking. The value network would figure out that such an exchange actually makes the position worse, but it doesn’t know that it should override the Monte Carlo simulations in this case.
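
To make the effect concrete, here is a toy numeric sketch (Python, with invented win rates): averaging over opponent replies makes an easily refuted threat look like a gain, while a minimax backup shows its true value.

```python
def backup_average(child_values, visit_counts):
    """MCTS-style backup: value is the visit-weighted average
    over the opponent's replies."""
    total = sum(visit_counts)
    return sum(v * n for v, n in zip(child_values, visit_counts)) / total

# Opponent's options after our easily refuted threat.
# Win rates are from our point of view; the numbers are invented.
values = [0.45,  # opponent answers correctly: we end up slightly worse
          0.90]  # opponent ignores the threat: we win big
visits = [95, 5]  # a few playouts always try the non-answer

avg = backup_average(values, visits)  # about 0.47: looks like an improvement
minimax = min(values)                 # 0.45: correct, the threat just loses points
print(avg, minimax)
```

If the position before the threat is worth, say, 0.46, the averaged backup makes the losing threat look attractive; the minimax backup does not.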

Adjusting komi

One way to avoid this effect is to internally adjust the komi until the program has a good chance of winning. This causes the program to play what it thinks are winning moves, while in fact it will lose by the few points you artificially adjusted the score. If the opponent makes a mistake, the program might regain a real winning position later. (SmartGo uses this technique; it also helps play more reasonable moves in handicap games.)
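
A rough sketch of that komi-adjustment idea, in Python rather than SmartGo’s actual code; the function name, the step size, and the probability thresholds are my own invention:

```python
def effective_komi(win_probability, real_komi, step=0.5, target=(0.4, 0.6)):
    """Shift the engine's internal komi until its winning chance is
    back in a healthy range, so it keeps playing 'winning' moves.

    win_probability(komi) -> the engine's chance of winning given that komi,
    treating komi as the margin the engine must achieve. The externally
    scored komi stays real_komi; only the internal target moves.
    Lower the bar when losing; raise it when far ahead (handicap games).
    """
    lo, hi = target
    komi = real_komi
    for _ in range(100):  # safety bound
        p = win_probability(komi)
        if p < lo:
            komi -= step  # losing: pretend fewer points are needed
        elif p > hi:
            komi += step  # far ahead: pretend more points are needed
        else:
            break
    return komi
```

The program then plays toward the adjusted target; if the opponent slips, it may convert that artificial lead into a real one.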

For AlphaGo, that technique won’t work well: as I understand it, the value network is trained to recognize whether positions are good for Black or for White, not by how many points a player is ahead.

Known unknowns

Another idea is to look at the sources of uncertainty in MCTS. The Monte Carlo winning percentages are based on statistics from the playouts, and that process involves many uncertainties, due to both the random nature of the playouts and the limited depth of the search. The more moves you look at, the smaller these unknowns become, and the statistical methods that decide which moves to explore more deeply and how to back up results in the search tree try to minimize these uncertainties.

However, whether the opponent will answer a threat is a yes-or-no decision; it should not be treated like a statistical unknown. In that case, you want to back up the results in the tree using minimax, not percentages. Something for the DeepMind team to work on before they challenge Ke Jie, so AlphaGo won’t throw another tantrum.

AlphaGo Don’t Care

AlphaGo is badass. Like the honey badger, AlphaGo just don’t care.

Lee Sedol may have underestimated AlphaGo in game 1, but he knew what he was up against in game 2. I watched Michael Redmond’s commentary during the game, then Myungwan Kim’s commentary this morning. The Go Game Guru commentary is also very helpful.

The tenuki at move 13: Professionals always extend at the bottom first? AlphaGo don’t care. It builds a nice position at the top instead.

The peep at move 15: This is usually played much later in the game, and never without first extending on the bottom. AlphaGo don’t care. It adds 29 later, and makes the whole thing work with the creative shoulder hit of 37. It even ends up with 10 points of territory there.

With 64 and 70, Lee Sedol made his group invulnerable to prepare for a fight at the top. AlphaGo don’t care, it just builds up its framework, and then shows a lot of flexibility in where it ends up with territory.

Lee Sedol threatens the territory at the top with 166? AlphaGo don’t care, it just secures points in the center instead. Points are points, it doesn’t matter where on the board they are.

What can Lee Sedol do in the next games? I think he needs to get a complicated fight going early in the game, start ko fights, in general increase the complexity. But I fear AlphaGo just won’t care.

Four More Games

AlphaGo’s victory over Lee Sedol last night was stunning. I’m still gathering my thoughts and trying to figure out what happened.

The game analysis at Go Game Guru has been very helpful. But I have to wonder whether I can trust the commentary — maybe AlphaGo knew what it was doing?

Move 80 was described by Younggil An 8p as ‘slack’. I wonder whether AlphaGo at that point already calculated that it was winning, and that eliminating the aji (latent possibilities) in that area would be the best way to reduce the risk of losing. I would love to know more about AlphaGo’s evaluation of that move.

AlphaGo demonstrated that it’s good at fighting, and would not back down from a fight. It also showed excellent positional judgement and timing, managing to invade on the right side with 102, get just enough out of that fight, and end with sente to play the huge move of 116 to take the upper left corner. And it doesn’t let up in the endgame once it sees a path to victory. We have not seen any ko fights yet, but there’s no reason to believe AlphaGo couldn’t handle those well.

For the remaining games, I think Lee Sedol must establish a lead by mid game at the latest to have a chance of winning. As the game gets closer to the end, there are fewer moves to consider, and the Monte Carlo playouts need fewer moves to reach the end of the game, so AlphaGo will just get stronger.

Move 7 was a new move. At least, it’s not in the GoGoD database of 85,000 professional game records. With SmartGo’s side-matching, only two games (both played in 2013) match that right-side position. Lee Sedol probably wanted to make sure AlphaGo couldn’t just rely on known patterns, but that gambit didn’t pay off. I don’t think he will try a similar move tonight.

There are four more games; I would not count Lee Sedol out yet. He now knows what AlphaGo can do, and won’t underestimate it again. We have some very exciting games to look forward to.