Although I found the AI a challenge for my first few attempts, I learned quickly and, as is often the case with board game AIs, was soon able to defeat it a large majority of the time in one-on-one games. Its biggest weakness, I’ve observed, is that it does not give enough (or perhaps any, it is hard to tell) consideration to the likelihood that you will be able to win on your next move. It will make an otherwise-sensible preparatory move to improve its odds of completing a column on its next move, without realizing that it isn’t likely to *get* a next move, and should instead shoot for a win immediately, even if the odds of success are small.

The choice between attempting to win on one’s current move or instead building up power to try to win on a subsequent move is a common dilemma in games; it’s embodied in a very pure form in Can’t Stop (and other press-your-luck dice games such as Nada), but it occurs frequently in other games in a more complex, harder-to-quantify way; deciding when to stop building units and launch a final assault in a military game, or whether to call an opponent’s all-in in poker vs. folding and trying to find a better spot, or when to change gears from building power to going all-out for victory points in many Euro games.

The case of Can’t Stop is particularly easy to approach mathematically, at least for a first-order solution.

Let’s consider an endgame situation in a two-player game of Can’t Stop, in which the two players have chances of x and y respectively of winning on any given move, prior to their first roll, assuming they commit to continue rolling until they either win the game or fail in a roll and lose their turn. We can write the first player’s odds of winning on his turn in an iterative fashion as:

w(x,y) = x + (1-x)(1-y)w(x,y)

In other words, the player’s chance of winning is his chance of winning immediately, plus his total odds of winning if his next turn comes around and no one has won yet. Since nothing has changed in that case, those total odds of winning are the same as the total odds now.

Rearranging this equation, we can isolate the function w(x,y) and therefore write it non-iteratively:

w(x,y) = x / (x + y – xy)
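(If you’d rather not take the algebra on faith, this closed form is easy to check with a quick Monte Carlo simulation of the turn-taking race; the function name and the test values of x and y below are mine, chosen purely for illustration.)

```python
import random

def simulate_first_player_wins(x, y, trials=200_000, seed=1):
    # Players alternate turns, first player going first; a turn wins
    # outright with probability x (first player) or y (second player).
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        while True:
            if rng.random() < x:   # first player wins on his turn
                wins += 1
                break
            if rng.random() < y:   # second player wins on his turn
                break
    return wins / trials

x, y = 0.3, 0.2
estimate = simulate_first_player_wins(x, y)
exact = x / (x + y - x * y)   # the closed form derived above
print(round(estimate, 3), round(exact, 3))
```

With these values the simulation lands within about a percent of the exact answer, x / (x + y – xy) ≈ 0.682.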

On the second player’s turn, it’s clear that the first player’s odds of winning, which we will call w’(x,y), are simply multiplied by a factor of (1 – y), since the second player failing to win immediately reverts us to the first situation. I.e.

w’(x,y) = (1 – y)w(x,y)

If a player, therefore, has an opportunity to pass up a shot at the win in return for multiplying his odds on subsequent moves by a factor of A, we are comparing his basic chance of winning w(x,y) to his alternative chances w’(Ax,y). In order for him to choose to make his preparatory move, then, these latter odds must be greater:

w’(Ax,y) > w(x,y), or

(1-y)Ax / (Ax + y – Axy) > x / (x + y – xy)

Cancelling out the factors of x on both sides and multiplying the denominators across yields:

A(1-y)(x + y – xy) > (Ax + y – Axy)

Which, after some multiplication and simplification finally gives us:

A > 1 / (1 – x – y + xy)

Thus, knowing our own and our opponent’s chances of winning on a given move, we can figure out the minimum factor by which we’d need to improve our odds in order to be willing to allow the opponent to take the first crack at victory.

**Example application**: *I believe my opponent has about a 25% chance of winning on his next move. I also think that I can approximately double my own chances of winning on my next move if I make a preparatory move now. How great must my chances of immediate victory be in order for me to take the shot now, rather than improving my position for next turn?*

2 < 1 / (0.75 - 0.75x)

1.5 - 1.5x < 1

1.5x > 0.5

x > 0.333

*In words, if I have better than 1/3 odds of winning immediately, I should take my shot now. Any worse, and I should make my preparations and double my chances of winning on a later turn. For many players, this is likely counter-intuitive; the risk averse will often insist on a greater than 50% chance of winning before they take their shot, which is far from correct.*
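For those who want to play with other numbers, the break-even point is mechanical to compute; this little sketch (function name mine) scans for the threshold in the worked example above.

```python
def min_improvement_factor(x, y):
    # Minimum factor A by which my per-move win probability must
    # improve before a preparatory move beats shooting now:
    # A > 1 / ((1 - x)(1 - y))
    return 1.0 / ((1.0 - x) * (1.0 - y))

# Worked example: opponent wins 25% of the time (y = 0.25), and a
# preparatory move would double my chances (A = 2). Scan for the
# smallest x at which doubling is no longer good enough.
y, A = 0.25, 2.0
break_even = next(x / 1000 for x in range(1, 1000)
                  if min_improvement_factor(x / 1000, y) > A)
print(break_even)   # just over 1/3, as computed above
```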

Now, there are a couple of interesting things about this equation:

First, it’s symmetric in x and y. It doesn’t matter whether we have a 20% chance of winning and our opponent has a 5% chance, or we have the 5% chance and our opponent has 20%; either way, the factor by which our odds would have to improve to justify the preparatory move is the same. It’s important to remember, however, that A is a factor, not a straight percentage… for the 20%/5% situation, A = 1/(0.80 × 0.95) ≈ 1.32, so the player with 20% needs a minimum improvement of about 6.3% (i.e. a 26.3% chance of winning on a given move after making the preparatory move), while the player with 5% only needs an improvement of about 1.6%.

Second, although it appears to apply only to the two-player situation, it can in fact be easily adjusted to a multiplayer game, provided we don’t care which opponent wins if it isn’t us. Simply replace y by the probability that *any* opponent wins, i.e. y = 1 – (1-y1)(1-y2)…(1-yn) where y1…yn are the various opponents’ individual chances of winning on a given turn.
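In code, folding several opponents into a single effective y is a one-liner (the function name is mine):

```python
from math import prod

def combined_opponent_chance(opponent_chances):
    # Chance that at least one opponent wins before my next turn:
    # y = 1 - (1 - y1)(1 - y2)...(1 - yn)
    return 1 - prod(1 - y for y in opponent_chances)

print(combined_opponent_chance([0.10, 0.20]))   # ≈ 0.28
```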

Third, it’s easy to see that for x >= 0.5, there is no value of y which would make a preparatory move desirable; A would exceed 2, which, given that x >= 0.5, would mean we would have to improve to Ax > 1, i.e. a greater-than-100% chance of winning, which is impossible. This draws our attention to one of the limitations of the math, however; I did say this is only a first-order solution, by which I mean that it neglects the opponent’s potential for improvement.

Say, for instance, I have a 50% chance of winning on the current turn, while my opponent has only a 1% chance. If my opponent cannot improve his chances, then by shooting for an immediate win, my odds of victory are (0.5) / (1 – 0.5*0.99) = 99.01%, which are still slightly better than my 99% odds if I allow my opponent his 1% long-shot first in return for even a 100% chance of winning if he misses.

If, however, both my opponent and I have available to us a move that would increase our odds to 100%, then I have only a 75% chance of winning by going for it immediately (two 50/50 shots before he gets his second turn), whereas if I improve myself first, he is forced to take his 1% shot rather than doing likewise, giving me the aforementioned 99% chance of victory.

Still, barring extreme examples like the opponent who can improve from 1% to 100% odds with a single preparatory move, this first-order solution can still be very usefully applied to gain ballpark estimates for real, in-game situations. Moreover, the result is exact as long as the opponent’s potential for improvement (as a factor of their current chances) is similar to one’s own; due to the symmetry of the equation, if the players’ respective odds are such that one player should not be delaying his attempts to win any further, then the same likely goes for his opponent.

This morning, I had an idea for another bluffing game, but one so simple that I could never market it in its basic form, and indeed would be surprised if I turned out to be the first to have come up with it. I haven’t tested it yet, but it’s one of these games that’s so simple as to be self-evident; it’s probably susceptible to mathematical analysis, but if humans have a hard time playing Rock-Paper-Scissors randomly, I doubt that a computer generated table of optimal move-selection probabilities would help anyone become unexploitable in this game.

I am going to start by explaining the game as originally envisioned, as a two-player game. Additional multi-player rules will be given afterwards – both a simple version, and a more elaborate one. The simple version is much the same as the two-player game, but suffers from the problem of being highly dependent on seating order, and would be too easily abused by colluding players sitting next to one another. The more elaborate version would be much more appropriate for cash play, though of course I cannot recommend gambling if you live in a place where gambling is illegal, or are under the legal age.

Here it is:

**Mentalist Poker**

- Each player starts with 20 chips or counters of some sort, a screen to hide them, and a “betting line,” perhaps a piece of string or a pencil, placed horizontally in front of him (but behind the screen). Chips placed in front of the line are being bet, while those behind are not.
- Determine at random which player starts with the First Bettor button.
- Each round begins with both players making an ante of one chip.
- The First Bettor secretly decides on a bet, which can be anywhere between zero chips and all of his chips. He places these in front of the line, being sure to disguise his motions.
- The opponent attempts to guess how many chips he has bet, stating verbally “Zero,” or “Five,” or whatever number he likes.
- The bettor’s screen is lifted and his bet revealed. If the opponent’s guess was correct, the opponent immediately wins the pot *and* the bettor’s bet.
- If the opponent’s guess was incorrect, the bet is added to the pot, the screen is replaced, and now it is the opponent’s turn to make a bet, and the First Bettor’s turn to guess.
- If it is a player’s turn to bet and he has no chips left, the opponent is assumed to guess Zero, and automatically wins the pot and the game.
- Play continues back and forth until the pot is won. Assuming both players have at least one chip left, the First Bettor button is passed to the opponent, both players ante again, and a new round is begun.

And that’s it! A player would obviously like to bet Zero as much as possible to avoid adding chips to the pot, but cannot do this all the time, or it is too easy for his opponent to collect the antes. Betting One or Two instead is likely to buy the player a chance to take a guess of his own, but as the pot grows, so too does the range of bets the opponent might profitably make. On the other hand, as players approach a situation where all their chips are in the pot, the incentive to bet low begins to grow once more, as a player who commits all his chips has only one more chance to guess correctly and win, or else lose everything. For this reason, a player who gets a chip advantage also gains a strategic advantage in subsequent large pots, so fighting for the lead may be worth larger gambles than poker-style “EV” calculations might indicate.
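To make the flow concrete, here is a minimal sketch of a single two-player round in code. The uniform-random betting and guessing is a placeholder, not a strategy recommendation, and the function name is mine.

```python
import random

def play_round(stacks, first_bettor, rng):
    # One round of two-player Mentalist Poker with naive uniform-random
    # bets and guesses. Returns the index of the round's winner.
    pot = 0
    for p in (0, 1):                        # both players ante one chip
        stacks[p] -= 1
        pot += 1
    bettor, guesser = first_bettor, 1 - first_bettor
    while True:
        if stacks[bettor] == 0:             # no chips left: automatic "Zero" wins
            stacks[guesser] += pot
            return guesser
        bet = rng.randint(0, stacks[bettor])
        guess = rng.randint(0, stacks[bettor])
        if guess == bet:                    # correct guess wins pot *and* bet
            stacks[bettor] -= bet
            stacks[guesser] += pot + bet
            return guesser
        stacks[bettor] -= bet               # wrong guess: bet joins the pot
        pot += bet
        bettor, guesser = guesser, bettor   # roles swap

rng = random.Random(0)
stacks = [20, 20]
winner = play_round(stacks, 0, rng)
print(winner, stacks)   # chips are conserved: stacks always sum to 40
```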

**Simple Multiplayer**

The rules are exactly the same as for two-player, except that the Bettor is now betting for the right to make the next guess. Thus, the Guesser is always the player to the *right* of the current Bettor, while the next Bettor will be the player to the *left*. The First Bettor button likewise passes left after each round. In other words, the betting passes around clockwise as in most games, but the Guesser is always the last person to have bet (or the player who was First Bettor in the last round).

(The reason this is vulnerable to collusion should be obvious – if I am sitting to your left, we agree beforehand that I will bet in a systematic way, allowing you to “guess” correctly most or all of the time and win the pot, presumably in return for a cut of the winnings, or for you allowing me to win when our positions are reversed. It’s equally bad for everyone else if I’m not colluding with you, but am simply a terrible player who e.g. bets Zero way too often.)

**Advanced Multiplayer**

- Antes are posted, and a secret bet selected as usual. No “First Bettor” button is needed, as play will pass continuously to the left.
- Larger chip stacks are required, at least equal to 10x the number of players. I.e. in a four-player game, each player should start with at least 40 chips. Multiple denominations are likely required, with players able to make change freely.
- The bet is capped at the current size of the pot. For instance, in a five-player game, with only the starting antes, the maximum bet would be Five.
- Instead of there being a single Guesser, everyone except the Bettor makes a guess, starting with the player to the Bettor’s right (i.e. the previous Bettor or the First Bettor of the previous round) and proceeding counter-clockwise.
- If *anyone* guesses correctly, the Bettor says so immediately and lifts his screen, and that Guesser wins the bet and the entire pot.
- If everyone has made a guess and *no one* has guessed correctly, the Bettor lifts his screen to reveal his actual bet. Unlike the standard version, the Bettor *keeps* his bet, and then *claims* as many chips from the pot as he had bet.
- Regardless of the outcome, everyone now antes one additional chip, and the next player to the left becomes the Bettor.

*Example: In a five-player game, there are 5 chips in the pot to start. If I am Bettor, I can make a bet between 0 and 5, i.e. 6 possible choices. The other players will have a total of 4 guesses. If I bet 3 and the guesses were 2, 1, 5 and 0, no one has guessed my bet. Therefore, I can claim 3 of the 5 chips from the pot, leaving 2. Everyone, including myself, now antes one more chip, making a pot of 7, and the player to my left must make a bet. As an added bonus, I will now get the first opportunity to guess.*
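The bookkeeping in that example is simple enough to check mechanically (variable names are mine):

```python
players = 5
pot = players                      # starting antes: one chip each
bet, guesses = 3, [2, 1, 5, 0]

if bet not in guesses:             # no one guessed the bet...
    pot -= bet                     # ...so the Bettor claims that many chips
pot += players                     # everyone antes one more chip
print(pot)   # → 7
```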

The strategy here is considerably different from that of the basic game, in that the risk-reward calculations for the Bettor are different. With multiple guesses instead of just one, there is a much larger chance of having my bet guessed. Thus, there is a temptation to surrender the pot by betting Zero a lot, and simply trying to win pots when I’m to the right of the First Bettor. If my bet went into the pot on top of this, I’d have an even further disincentive to bet aggressively, thus the game would be pretty boring. That’s why, in this multiple-guesses version of the game, a successful bet is taken *out* of the pot, rather than added to it. The pot size will also fluctuate constantly, rather than growing steadily until someone takes it down. The stakes (and thus available options) can grow quite large indeed if multiple players in a row make small bets that are not guessed.

Although Cash or Crash is themed as a game of commodity speculation, the fundamental mechanic will be familiar to fans of the traditional dice game known variously as Liar’s Dice, Perudo, Cachito or Santaba; players are invested in a bubble market for one or more of Oil, Wheat or Gold, trying to get rich by correctly timing the collapse and getting out at the right time. Each player is privy to his or her own “insider information” (face down cards) and must attempt to determine what cards the others have played based on their bids, all while bluffing them with his or her own bids.

Like Liar’s Dice, the players’ predictions for the total value of the commodities must constantly escalate, up until someone decides that the market is poised to collapse, whereupon he or she announces “SELL!” and the cards are revealed. If the market had indeed grown too big, the last bidder loses the round and his or her cards, while if it still had room to grow a bit bigger, the seller suffers the consequences instead.

Where Cash or Crash differs, however, is in offering a wider variety of tactical options and a poker-like escalation of stakes over the course of a round. Unlike Liar’s Dice, players cannot change from one commodity to another at will… but they are allowed to bid on multiple commodities at once. So, if the current bid is 4 Gold, the player cannot simply switch to 5 Oil, but can choose between increasing to 5 Gold, or branching out and saying “4 Gold and Oil,” which puts even more pressure on subsequent players, as these combination bids must turn out to be correct for *both* commodities. Alternately, players *may* switch to any single commodity of their choice, but only by investing an additional card, thus expanding the market (and their own knowledge of it), but simultaneously increasing the amount they stand to lose if they are the eventual loser of the round.

The game is playable by up to eight players (each with an identical stack of ten cards to start with) and the endgame becomes a tense war of attrition, culminating in an intense battle of wits between the final two players. The rules also include Blitz and Endurance rules, so you can play it as short filler or a longer, more serious competition, plus an optional Doubling rule for the nerviest sharks. Regardless of the version you play, Cash or Crash offers all the tension of the final table of a poker tournament, without the need for money on the line.

Aside from my game design career, I’m a multi-talented creative freelancer. I’ll take on almost any kind of writing, editing, design, illustration or fine art contract, and my rates are reasonable. If you’re in the market for a writer or artist, please check out my portfolio and don’t hesitate to get in touch.

As a refresher, the basic premise of the game is that players, in turn, get to pick what number they’re going to try to roll on a die. If only one player gets their number, they win, but if two or more do, the one who picked the harder number to roll is the winner.

The conclusions that I reached from my analysis, and which I think have general application to most games in which players get to choose their level of risk, are as follows:

- It’s never correct in this game for anyone but the last player to choose a very safe (better than 50% shot) number. The reason is that someone choosing afterwards can always take just a slightly bigger gamble (but still better than 50%) and win most of the time. What this means in general for games of this sort is that you don’t want to play it too safe until you know what your opponents are doing – and even then, only play it safe if you feel they’re all likely to fail. Games are not for the risk-averse!
- In a two-player game, you always want to either push just a little harder than your opponent, or else play it as safe as possible if you think he’ll fail. This kind of brinksmanship is intuitive, but the key strategy in most games of this sort would be in determining exactly where that brink lies. In the simple case of a game where the bigger gambler wins if both succeed and you keep going if both fail, you’re shooting for around a 40% chance of success.
- When playing with more than two players, you don’t want to match anyone else’s strategy too closely if others still have a chance to adjust theirs. This is the least intuitive of the results, as gamers often fall into “groupthink” patterns, wherein everyone plays a similar strategy. But it makes sense when you think about it; if two people are doing the same thing, the third player is effectively playing against a single opponent (albeit one who gets two shots at succeeding), and it’s thus easier for him to pick a winning counter-strategy. If the opponents vary their strategies, it’s hard for the remaining player to find a single counter-strategy that works against both.
- When your opponent gets a chance to react to your strategy, your best move is generally the one which puts him in a position where all choices are equally attractive. When there’s little advantage to choosing one strategy over another, you minimize the advantage of having that choice.

These are interesting conclusions, and intuitively correct once they’re pointed out. The third – about adopting different strategies than your opponents rather than imitating – is the most interesting of them, and will probably merit additional investigation another day.

The first interesting thing to notice is an elaboration on what I said previously, about larger die sizes (and thus a larger range of choices) favoring the player who gets to pick last. When we think about multiplayer games, we can see that the actual concern has to do with the number of choices relative to the number of players; the extreme case would be that in which we have as many players as there are sides on the die. In that case, we know that all numbers will be chosen in the end. Thus, the first player has just as much information as the last, and can therefore choose the best number for himself, meaning that the last player is at the greatest disadvantage.

In the three-player case, it (perhaps surprisingly) turns out that the break-even point is once again that of the standard six-sided die. The first player should choose 4, the second should choose 5 (just as in the two-player game) and the third is now left with no better choice than to pick 1 and hope the other two fail. Thus, the second player has a 1/3 chance of winning outright, the first player will win 1/2 of the remaining 2/3 of the time, thus 1/3 as well… leaving 1/3 for the third player.

However, both the 4- and 5-sided dice grant the first player the greatest advantage and the last player the greatest disadvantage, unlike in the two-player game. The players’ best choices and odds of winning for the 4-sided case are: 3 (41.4%), 2 (31.0%), and 4 (27.6%). For the 5-sided case, they are 4 (39.6%), 3 (35.6%) and 5 (24.8%).
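Since the dice are small, these figures can be reproduced exactly rather than by simulation; the helper below (my own) enumerates every combination of faces, treats a roll of one’s pick or higher as a success, awards the round to the highest successful pick, and discards the all-fail outcomes as rerolls.

```python
from fractions import Fraction
from itertools import product

def win_probs(sides, picks):
    # Exact win probability for each player, given their target numbers.
    wins = [0] * len(picks)
    live = 0                        # outcomes where at least one player succeeds
    for rolls in product(range(1, sides + 1), repeat=len(picks)):
        successes = [i for i, (r, p) in enumerate(zip(rolls, picks)) if r >= p]
        if not successes:
            continue                # everyone failed: reroll
        live += 1
        wins[max(successes, key=lambda i: picks[i])] += 1
    return [Fraction(w, live) for w in wins]

print(win_probs(6, [4, 5, 1]))                      # exactly 1/3 each
print([float(p) for p in win_probs(4, [3, 2, 4])])  # ≈ 0.414, 0.310, 0.276
```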

The next interesting question is what the last player’s winning strategies look like. As one might guess, they are qualitatively similar regardless of the die size. Here are tables showing the winning choice for the 6-, 10-, 20- and 40-sided cases. The first two players’ choices run along the two axes, with “1” at the lower and left edges and the maximum number at the top and right.

Green represents the most aggressive choice: picking the number one higher than the higher of the two opponents’ picks. Yellow is the middle strategy, namely picking a number one higher than the lower of the two opponents’ picks. Red represents the most defensive strategy, refusing to gamble at all, picking “1” and hoping both others fail.

Worth noting is that the boundaries between regions seem to be straight lines (except perhaps for the green-red boundary – it’s hard to tell), and that the red-yellow border is almost, though not quite orthogonal. What this latter fact implies is that there’s a certain maximum gamble that one should be willing to accept, falling around 35-40%. If both opponents pick numbers right around that edge, it’s better to join them in gambling, but if both go much beyond, or if one tries to play “on the edge” and the other chooses the safe route, the last player is better playing it safe himself.

Okay, but what does best play for the other two players look like? You might guess that, given the boundary we just discussed, both players might want to pick a number around there, but this isn’t quite right.

Instead, recall what we said in the case of the two player game, that the point of minimal advantage to the third player is likely to be on the boundary between two strategies. Now that there are three players, however, there are now *three* viable strategies for the last player… we might then guess that the optimal strategy for the other two is to pick numbers that place him right on the junction between all three! This turns out to be pretty much the truth.

If you look at the image up at the top of this post, it shows (for a game with a 20-sided die) the third player’s winning chances for a given combination of choices by the other two; pure red (which doesn’t actually appear, quite) would represent an exactly 1/3 chance for the third player (assuming he makes the right decision), while cooler colors represent better chances of winning. The diagram to the right is similar, but for a 40-sided die.

Looking at the shape, we see that the reddest regions stretch out in spike-like shapes which, intuitively enough, correspond to the boundaries in strategies we found previously. These are marked by black lines on the diagram. The star represents choices of 29 by the first and 23 by the second player, what my program usually gives me as optimal play (though even doing 200,000 iterations, it sometimes moves around a little bit, which illustrates just how close the probabilities are around here). If those are the choices made, the third player should pick 1, and win 38.5% of the time, though he wouldn’t be far off from that picking 24 or 30, either. As it’s easy to see, this is right around the point where a slight shift in choices by the other players could make any of the three strategies the best one.

*NOTE: As “perfect play” doesn’t generally exist for games for more than two players, I should clarify that I solved this for the assumption that each player is working to maximize their own odds (rather than minimizing one specific opponent’s odds) and that the players themselves are likewise making this assumption about one another.*

And that’s about as much analysis as I want to do on this supposedly “simple” game for now. I’ll make one final post tomorrow, summing everything up and discussing how it might apply to other, more complex games, as well as talking about a couple of possible variations on it that would be even trickier to solve.

In the game, each player in turn picks a number, from 1 up to the highest number on whatever die is being used. Then everyone rolls, trying to get their number or higher. Out of those who succeeded, the one who picked the highest number (i.e. who took the biggest risk) wins. If everyone fails, they all reroll until at least one person succeeds.

It’s easy enough to work out some basic results for the two-player version on paper. Yesterday, I posed six questions of increasing difficulty to be answered, whether mathematically or simple guesswork. Here they are again, now with the answers.

**In general, is it better for the first player if the die has more sides, or fewer?**

More sides. The easiest way to see this is to consider what advantage each player has. The first player’s advantage is having first pick of the numbers; if one number is better than any of the others, he’ll actually be better off than the second. But the second player’s advantage is that of controlling the relative values of the two players’ numbers. The second player’s advantage grows with the number of sides on the die (as he has finer control), while the first player’s diminishes, as the difference in probabilities for one number and its adjacent numbers grows smaller.

**Is there a rule of thumb for perfect play? How should the two players decide which numbers to take?**

Well, things are pretty simple for the second player; either he should choose the next number higher than his opponent, or he should choose 1. There’s clearly no reason to go more than 1 higher than the opponent, as this only diminishes his own odds without added benefit. Likewise, if he chooses to play it safer than the opponent, there’s no reason not to just play it as safely as possible. If the first player chooses wisely, it may not be trivial for the second player to pick which of these two moves to make, but the range of choice is still a lot narrower, so it’s easier to make the right call.

How about for the first player? Well, his goal should probably be to make life as hard as possible for his opponent, putting him in a situation where neither of his two options is much better than the other. It’s easy to see that choices at the two extremes are no good, so the right answer is likely to fall close to the middle of the spectrum. Intuitively, one might guess that it should be the number that gives a 50/50 shot, but this turns out not to be correct, except for dice with few sides.

As you move to dice with more and more sides, you can find a cubic equation that gives you the result that the ideal probability the first player is looking for is 1.5 – sqrt(1.25), or about 0.381966. So, for instance, on a 100-sided die, the number to pick is 62 (which way to round is tougher, and beyond my capabilities, but brute forcing it numerically gives the result that 62 is better than 63 in that situation).
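The brute-force check is only a few lines, using the second player’s two candidate responses (gamble one higher, or pick 1) from the previous answer; the function name is mine.

```python
def first_player_value(n, sides=100):
    # First player picks n; p1 is his chance of rolling n or higher.
    p1 = (sides - n + 1) / sides
    # Second player's two sensible replies:
    safe = 1 - p1                        # pick 1 and hope the first player fails
    p2 = (sides - n) / sides             # pick n + 1 and outgamble him
    gamble = p2 / (p2 + p1 * (1 - p2))   # conditioned on someone succeeding
    return 1 - max(safe, gamble)         # second player takes his better option

best = max(range(1, 100), key=first_player_value)
print(best)   # → 62
```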

The graph to the right shows the exact probabilities of hitting the numbers ideally chosen by the first player for dice ranging from 3 up to 100 sides.

**Are there any die sizes for which it’s better to pick first? How many, and which one(s)?**

Only one – the three-sided die. There, the first player obviously chooses 2. The second player’s best choice is to gamble for a 3, but this still gives the first player a 57.1% chance of a win, because the difference in likelihood between the 1/3 and 2/3 shot is sufficient to overcome the advantage the second player has of winning when both succeed.

**Are there any die sizes for which it doesn’t matter who picks first? How many, and which one(s)?**

Two, four and six sides. Of these, the interesting one is the six-sided die – and particularly so because it’s the one we use most commonly! In the case of a two-sided die (a coin), it’s obvious that the odds are 50/50 regardless of who picks what. In the case of a four-sided die, the first player picks 3, and like the three-sided die, the odds aren’t good enough for the second player to come out ahead by picking 4… but unlike the three-sided die, he can pick 1 and come out even, since the first player will roll 3-4 exactly 50% of the time.

The case of the six-sided die is interesting because the second player actually has two equal options. The first player should choose 4, thus the second player can again go for 1, counting on the 50% chance of his opponent failing. But instead, the second player can also pick 5. This is slightly less obvious to work out, but has a neat symmetry to it: 1/3 of the time, the second player will hit his 5-6 and win regardless of what the first player rolled. Of the remaining 2/3, the first player will win 1/2 the time by hitting his 4-6, i.e a total of 1/3 of the time as well. The remaining 1/3 of the time, we have to roll again, but since the odds are balanced on each roll, we don’t have to work out the whole series to know that the total odds come out to 50/50.
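Both of the second player’s options here can be verified exactly with a couple of fractions (the function name is mine, and the gamble formula simply conditions out the both-miss rerolls):

```python
from fractions import Fraction

def second_player_odds(p1, p2):
    # Second player takes the higher pick, so he wins whenever he hits;
    # both-miss outcomes are rerolled, hence the conditioning below.
    return p2 / (p1 + p2 - p1 * p2)

p1 = Fraction(3, 6)   # first player picks 4 (hits on 4-6)
p2 = Fraction(2, 6)   # second player picks 5 (hits on 5-6)
print(second_player_odds(p1, p2))   # → 1/2
print(1 - p1)                       # picking 1 instead: also 1/2
```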

**In general, is it better for the first player if the die has an odd number of sides, or an even number?**

This was an inadvertent trick question, based on flawed assumptions I’d made at the time of the last post. Looking at the numbers, there seems to be no pattern to whether a given die will be better or worse for one player depending on whether its sides are odd or even. To the right is a graph of the second player’s odds of winning, for 10- to 30-sided dice. Unlike the previous graph, we can see that the zig-zags do not follow a regular pattern.

**Assuming perfect play, what’s the limit to how big an advantage the second player can have (for any die size)?**

For this, we have to go back to the second question, and our realization that the best strategy for the first player is to leave the second with two options that are as close to equivalent as possible.

As the number of sides grows huge, the difference in the probability of rolling at least some number M and the probability of rolling at least M+1 becomes very small. Thus, if the second player chooses to take the number one higher than the first player, it becomes as if both are rolling for the same target, except that the second player wins if they both get it. If you think about it for a moment, you realize that this is exactly the same as a game in which they take turns rolling, and the first player to hit the number wins, with the second player having the advantage of the first roll. (That is, if the second player’s first roll hits, he wins, and the first player’s roll doesn’t count. If he misses, though, then we look at the first player’s first roll. If that hits, then the second player doesn’t get a second roll, and so on.)

So, we write the probability of the second player winning recursively as:

W = p + (1-p)^2*W

Where p is the probability of a single player hitting his target number, (1-p)^2 is the probability of both missing, and W is the total probability of winning (which appears on the right side of the equation because it remains unchanged if both miss).
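Solving this recursion gives W = p / (1 – (1-p)^2), which a quick simulation of the alternating-roll game confirms (p = 0.3 below is an arbitrary test value):

```python
import random

def simulate_second_player(p, trials=200_000, seed=2):
    # Both players shoot for the same target; the second player rolls
    # first, so he wins any "tie" by construction.
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        while True:
            if rng.random() < p:   # second player hits first: he wins
                wins += 1
                break
            if rng.random() < p:   # first player hits: second player loses
                break
    return wins / trials

p = 0.3
closed_form = p / (1 - (1 - p) ** 2)   # W = p + (1-p)^2 * W, solved for W
print(round(simulate_second_player(p), 3), round(closed_form, 3))
```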

Now, obviously, if the second player chooses 1 instead, he always succeeds, so his chances of winning are just (1-p): he wins if the first player misses, and loses otherwise. Since we want to make it so that the second player can’t gain any greater advantage whichever option he chooses, we set W = (1-p), giving us:

(1-p) = p + (1-p)^3

This is the cubic equation I mentioned earlier, and its solution within the range 0 < p < 1 is about 0.381966. Since the second player’s odds are 1 minus this, as the number of sides on the die gets larger and larger, the second player’s odds converge on 0.618034, which, expressed as betting odds, makes him a 1.62-to-1 favorite over the first player.
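As a check, the cubic can even be solved exactly: moving everything to one side, p + (1-p)^3 - (1-p) simplifies to -p(p^2 - 3p + 1), whose root between 0 and 1 is p = (3 - √5)/2, making the second player’s limiting odds (√5 - 1)/2 (the golden ratio conjugate, as it happens). A few lines of Python to confirm:

```python
import math

# Root of (1 - p) = p + (1 - p)^3 in (0, 1): the equation factors as
# -p * (p**2 - 3*p + 1) = 0, so p = (3 - sqrt(5)) / 2.
p = (3 - math.sqrt(5)) / 2

print(round(p, 6))            # first player's per-turn win chance: 0.381966
print(round(1 - p, 6))        # second player's limiting odds: 0.618034
print(round((1 - p) / p, 2))  # as betting odds: 1.62 (to 1)

# Sanity check against the original equation:
assert abs((1 - p) - (p + (1 - p) ** 3)) < 1e-12
```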

Now, I promised some results for a multiplayer version of the game, but everything turned out to be much more complicated than I’d thought. I have the results, but this post is already very long, so I will again keep you hanging until tomorrow. As a sneak preview, though, here is a graphic I created, showing the odds of winning for the third player’s best choice, given the first and second players’ choices in a game with a 20-sided die. Bluer colors represent better chances for the third player; pure red would be exactly a 1/3 chance, but for this size of die, one of the results I found is that the third player ALWAYS has an advantage, regardless of how the other two attempt to conspire against him.

After writing my last post, about how risk-reward decisions are affected by a game in which the goal is achieving an all-time high score, I got to thinking about more general cases of risk-reward decision-making in games, and how that is, like these Grime dice, a non-transitive thing. If you have the opportunity to see what kinds of risks your opponents are taking, you’re usually going to want either to gamble just a little bit bigger, so as to come out slightly ahead if you both succeed, or, if you feel your opponent’s strategy is too high-risk, to play as safely as possible and count on them failing.

Having been reminded of this by the Grime dice, I decided to invent an extremely minimalist dice game to take a closer look at this idea in the abstract.

Here’s how the game works:

We play with a single N-sided die. N could be any integer, and as we’ll see, there are actually qualitative differences in the game depending on what kind of die we choose. We could also play with two players, or more.

First, we determine a player order randomly. Then, in turn, we each have to pick a number, ranging between 1 and N. Everyone has to pick a different number; it is forbidden to take the same number as a previous player. Once we’ve all picked a number, everyone rolls the die, trying to roll at least as high as the number they picked. If everyone rolls under their number, we roll again, repeating as needed until at least one person succeeds. Out of those who succeeded, whoever picked the highest number wins.

*Example: Playing with a 6-sided die, Alice picks 5 and rolls 2, Ben picks 2 and rolls 6, and Chris picks 4 and rolls 4. Chris wins.*
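The rules are simple enough that a few lines of Python can simulate them directly. Here’s a minimal Monte Carlo sketch (function names are mine) using the picks from the example:

```python
import random

def play_game(picks, sides, rng):
    """Roll repeatedly until at least one player makes their pick; among
    those who do, the highest pick wins.  Returns the winner's index."""
    while True:
        rolls = [rng.randint(1, sides) for _ in picks]
        hits = [i for i, pick in enumerate(picks) if rolls[i] >= pick]
        if hits:
            return max(hits, key=lambda i: picks[i])

rng = random.Random(0)
picks = [5, 2, 4]  # Alice, Ben, Chris from the example
wins = [0] * 3
for _ in range(100_000):
    wins[play_game(picks, 6, rng)] += 1

for name, w in zip(["Alice", "Ben", "Chris"], wins):
    print(f"{name}: {w / 1000:.1f}%")
```

With these particular picks, Ben’s safe 2 turns out to be the weakest choice of the three, despite succeeding on the most rolls.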

I’ll have to write a little computer program to investigate what the probabilities look like with more than two players, but first I worked out the math for two players on my own, and found some interesting things. But rather than just spill the beans right away, it might be more fun if I allow you the chance to either work out the results yourself, or test your intuition by taking a guess.

In ascending order of difficulty (both mathematical and intuitive), here are the questions:

- In general, is it better for the first player if the die has more sides, or fewer?
- Is there a rule of thumb for perfect play? How should the two players decide which number to take?
- Are there any die sizes for which it’s better to pick first? How many, and which one(s)?
- Are there any die sizes for which it doesn’t matter who picks first? How many, and which one(s)?
- In general, is it better for the first player if the die has an odd number of sides, or an even number?
- Assuming perfect play, what’s the limit to how big an advantage the second player can have (for any die size)?

The answers will be given tomorrow, along with whatever results I come up with for the multiplayer case.

This is another old cryptic of mine. A couple of the clues weren’t quite to my liking anymore, so I redid them, but there were some pretty good ones that didn’t need changing! Some of the clues are harder than others, but the grid is so open that any word you have trouble with should solve itself by checked letters eventually.

The trouble is that, when shooting for a score as high as I need to be #1 again, I find myself being forced to adopt strategies that aren’t nearly as much fun as the ones I was employing when I first started out. Whereas consistency is usually and intuitively a desirable trait in a game player, the nature of competing for high scores encourages exactly the reverse.

What I realize now is that there’s an additional problem with big group games, one I failed to mention in my last post, and it’s really what’s going on here: when the goal is a high score, a seemingly single-player game is actually more like an *infinity-player* game! Let me explain.

Whether or not a game contains a luck component (and most high-score based games do), the performance of a player’s brain on a given day is itself a form of luck, and a player’s results will always form a statistical distribution of some sort. Some games you do better, some games you do worse, but you have an average, and you have a standard deviation. Note that the distribution is not necessarily a bell curve, depending on the nature of the game, but let’s pretend it is to simplify the discussion.

If you’re consistent at being good at a game – which is what we’d normally call a good player, whether in chess, or poker, or soccer – what that means is that your average is high and your deviation is low; you can beat most people most of the time, and you won’t throw too many games away against weaker opponents by committing errors.

But this sort of performance actually works against you in a high score competition, whether you’re competing against others, or only against your own previous scores. Either way, your average is, by definition, lower than your highest scores; so instead of that variance in your performance being a hindrance against weaker players, it is actually your greatest asset in striving to achieve that one miracle game to add a new notch to the measuring post.

In other words, most games ask us to be consistent about being good. High scores games, rather, ask us to be *good at being inconsistent.* There’s no penalty for a failed gamble, except having to start another game, so the fundamental strategy in achieving high scores is to look for ways to make the gambles with the highest possible payoffs, regardless of the odds of actually winning them.

The reason this is similar to a large group game has to do with the number of separate results you’re trying to beat. In a one-on-one game, you only need to beat one other player’s score, so it’s clear that maximizing your average performance is key. The bigger the group gets, the more separate people you need to beat, so unless you’re much better than everyone, you need to start gambling a bit, or else resign yourself to consistently ending up in second or third. In a high score game, though, you’re trying to beat *every score ever posted in the history of the game, including your own*. Thus, if 100 people have each played 100 times, in shooting for the high score, you’re trying to have the highest score out of 10,000 results. This is equivalent to trying to win a normal competitive game with *10,000 players*! That’s why you *really* have to gamble big or go home.
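This is easy to demonstrate with a toy model. In the sketch below (assumptions are mine: normally distributed scores, a “consistent” player with mean 100 and deviation 10 against a “gambler” with mean 80 and deviation 40), the consistent player dominates head-to-head while the gambler owns the high-score table:

```python
import random

rng = random.Random(42)

def consistent():          # good average, small spread
    return rng.gauss(100, 10)

def gambler():             # worse average, huge spread
    return rng.gauss(80, 40)

# Head-to-head: the consistent player wins most single games.
trials = 100_000
head_to_head = sum(consistent() > gambler() for _ in range(trials)) / trials
print(f"consistent wins head-to-head: {head_to_head:.0%}")

# High-score chase: compare each style's best result over 10,000 plays.
best_consistent = max(consistent() for _ in range(10_000))
best_gambler = max(gambler() for _ in range(10_000))
print(f"best of 10,000 plays: consistent {best_consistent:.0f}, "
      f"gambler {best_gambler:.0f}")
```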

Back to Deck Buster: the basic gameplay here is that you’re assigning cards one at a time to one of three hands you’re building, receiving video-poker-style payouts in the form of additional cards. The nature of the game is that full houses are your safest bet to shoot for, because if you miss, you can still score a two-pair or three-of-a-kind, and will occasionally luck into a four-of-a-kind instead. But you receive a large bonus once you’ve created every type of hand at least once… meaning that to shoot for a high score, you need to get a royal flush and a straight flush at some point.

Flushes, particularly straight flushes, are risky to shoot for, because they mean you’ll have no pairs in your hand. If you fail to get them, then, you’ll score nothing at all (or only a pair). Fail a couple of times, and you’re doomed to end the game quickly, with a miserable score. If you were trying to perform well on average, then, you’d focus on the high-probability hands and only go for the big ones if you already had four of the five cards needed. Instead, the quest for high scores means the optimal thing is to shoot for your royal flush from the get-go, any time you’ve got the slightest shot at it, and simply abandon the game and start over if you fail to get it. This is boring, frustrating and grindy, but it’s unfortunately the best way to shoot for a big score.

One basic solution to this problem is to limit the payout from any given opportunity, and ensure that consistency will afford a player a greater number of those opportunities. Everyone knows, for instance, that in Tetris, one wants to create a deep, single-column well to drop the straight pieces into for four-line scores. There’s a certain gamble involved here, as the necessary piece may not come up, but a consistent player will nonetheless be able to construct the appropriate setup more effectively, as well as deal with the situation when the game refuses to give her a straight line piece for a while.

Another common way is to have scores ramp up as the game continues, so that bigger and riskier opportunities to gamble only come along later in the game. That way, the player is more likely to face a real choice. If he’s already invested 20 minutes into a given session and has been performing above his average, it’s not clear that taking a long-shot gamble is the best option… especially since surviving a little while longer would give the player a shot at an even bigger and better gamble later. This is what Yoku-Gami does, and the reason that it’s a much better game than Deck Buster.

**Related**: The challenges of big-group games