Losing to Win
Ivars Peterson

If you feel that life always deals you a tough hand, take heart. Scientists have proved that two games guaranteed to give a player a steady string of losses can generate a sure-fire winning streak if played alternately.

This counter-intuitive behaviour of games of chance was discovered recently by physicist Juan Parrondo of the Universidad Complutense de Madrid in Spain, and has become known as Parrondo's paradox. Physicists have had an interest in simple games ever since the mathematician John von Neumann developed so-called game theory in the late 1920s. He showed that certain kinds of game involving bluffing, like some card games, have optimal strategies that guarantee the player the best outcome. Von Neumann's work on games turned out to be applicable to some situations in economics, social behaviour, and ecology.

Gregory Harmer and Derek Abbott of the University of Adelaide, Australia, now demonstrate in Nature [1] the bizarre consequences of Parrondo's paradox, with reference to two 'losing' games. In Game A, a player gambles on a simple coin-tossing process in which the coin is loaded to guarantee that the probability of winning is less than 1 in 2. A win might, for example, correspond to an outcome of 'heads', whereas the coin is designed to fall with slightly greater probability as 'tails'. The player is then sure to suffer losses roughly proportional to the number of times the game is played. Game B in Harmer and Abbott's scenario is more complicated, involving two biased coins.

Which of them is tossed depends on how much money the player has. One of the coins gives a good probability of winning; but the game is set up so that the other coin, which usually gives a loss, is tossed more often. So the player is sure to lose out in the long term. The researchers show that, as expected, each game played repeatedly generates a steady decrease in the player's capital. But what if the two games are alternated? You'd expect the player to be no better off. But it turns out that two rounds of Game A followed by two of Game B actually produce a steadily increasing capital. What is more, the same is true when the games are switched at random.

How can this be possible? Switching between the two games, explain the researchers, creates a ratchet-like accumulation of wins. Winning rounds, mainly thanks to the 'good' coin in Game B, carry the player's capital 'uphill'. Swapping to the other game then 'traps' the winnings there before subsequent repetitions of the same game can introduce the otherwise inevitable decline.

The researchers propose that Parrondo's paradox may operate in economics or social dynamics to extract benefits from ostensibly detrimental situations. For example, they suggest that if a society or an ecosystem suffers from declines in either the birth rate or the death rate, declines in both together might combine with favourable consequences.
Alternating between the games produces a ratchet-like effect. Imagine an uphill slope whose steepness reflects a coin's bias, with winning meaning moving uphill. In the single-coin game the slope is smooth; in the two-coin game it has a sawtooth profile. Going from one game to the other is like switching between the smooth and sawtooth profiles: any winnings that happen to come along are trapped by the switch before subsequent repetitions of the original game can drag the capital back down.
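This ratchet behaviour is easy to reproduce numerically. Below is a minimal Python sketch, assuming the parameter values commonly used for Parrondo's games (p = 1/2 - epsilon, p1 = 1/10 - epsilon, p2 = 3/4 - epsilon, M = 3, epsilon = 0.005); the function names are illustrative, not from any published code.

```python
import random

EPS = 0.005
P_A = 0.5  - EPS   # game A coin
P1  = 0.1  - EPS   # game B 'bad' coin (capital a multiple of M)
P2  = 0.75 - EPS   # game B 'good' coin
M   = 3

def play_round(game, capital, rng):
    """Play one round of game 'A' or 'B'; return the new capital."""
    if game == 'A':
        p = P_A
    else:  # game B: which coin is tossed depends on capital mod M
        p = P1 if capital % M == 0 else P2
    return capital + 1 if rng.random() < p else capital - 1

def simulate(strategy, rounds=200_000, seed=1):
    """strategy: 'A', 'B', or 'random' (pick A or B with probability 1/2)."""
    rng = random.Random(seed)
    capital = 0
    for _ in range(rounds):
        game = strategy if strategy != 'random' else rng.choice('AB')
        capital = play_round(game, capital, rng)
    return capital

print(simulate('A'))       # negative: game A alone loses
print(simulate('B'))       # negative: game B alone loses
print(simulate('random'))  # positive: random alternation wins
```

With a few hundred thousand rounds, games A and B each drift steadily downward while the randomly alternated game drifts upward.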
A mathematical analysis of the games using discrete-time Markov chains.

So how can two losing games, when combined at random, produce a winning game? Here we sketch a mathematical analysis that establishes the paradox. By examining the conditions for recurrence of the corresponding discrete-time Markov chain (DTMC), it can be shown that the probabilities can be chosen so that games A and B each lose when played individually, yet win when played in random alternation. (The DTMC analysis was provided by Peter Taylor, Dept. of Applied Mathematics.)
Only a summary of the analysis is presented here; a more detailed description is available in the relevant papers.
The player wins a single round of game A with probability p and loses with probability 1 - p. The analysis for game A is quite elementary, and the result, which accords with our intuition, is that we lose in the long run if

    (1 - p) / p > 1    (1)

Now let us turn to game B. Here the probability that the player wins a single round depends on the value of their current capital. If the capital is a multiple of M, the probability of winning is p1, whereas if the current capital is not a multiple of M, the probability of winning is p2. The corresponding losing probabilities are 1 - p1 and 1 - p2 respectively. The analysis for game B shows that, in the long run, we lose if

    [(1 - p1) (1 - p2)^(M-1)] / [p1 p2^(M-1)] > 1    (2)

Now consider the situation where the player plays game A with probability g and game B with probability 1 - g (g stands for the gamma of the original papers). If our capital is a multiple of M, the probability that we win the randomised game is q1 = g p + (1 - g) p1, whereas if our capital is not a multiple of M, the probability that we win is q2 = g p + (1 - g) p2. The probabilities of losing are 1 - q1 and 1 - q2 respectively. We observe that this is identical to game B except that the probabilities have changed. It follows from the condition in (2), with p1 and p2 replaced by q1 and q2, that the randomised game is winning if

    [(1 - q1) (1 - q2)^(M-1)] / [q1 q2^(M-1)] < 1    (3)

Thus, the existence of Parrondo's paradox will be established if we can find parameters p, p1, p2, g and M for which the three conditions above are satisfied simultaneously. For example, with p = 5/11, p1 = 1/121, p2 = 10/11, g = 1/2 and M = 3, we obtain 6/5 > 1 for (1), 6/5 > 1 for (2) and 217/300 < 1 for (3): each game loses on its own, yet the randomised combination wins.
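These three conditions can be checked with exact rational arithmetic. A quick Python sketch using only the standard library (variable names are ours):

```python
from fractions import Fraction as F

# Example parameters from the text.
p, p1, p2, g, M = F(5, 11), F(1, 121), F(10, 11), F(1, 2), 3

# Condition (1): game A loses if this exceeds 1.
lhs_A = (1 - p) / p

# Condition (2): game B loses if this exceeds 1.
lhs_B = ((1 - p1) * (1 - p2)**(M - 1)) / (p1 * p2**(M - 1))

# Winning probabilities of the randomised game (play A with prob g).
q1 = g * p + (1 - g) * p1
q2 = g * p + (1 - g) * p2

# Condition (3): the randomised game wins if this is below 1.
lhs_R = ((1 - q1) * (1 - q2)**(M - 1)) / (q1 * q2**(M - 1))

print(lhs_A, lhs_B, lhs_R)  # 6/5 6/5 217/300
```

Exact fractions avoid any doubt about floating-point rounding in the comparison against 1.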
Game B is losing -- A common fault
If we consider game B prima facie, we may intuitively guess, from a statistical point of view, that coin 2 (the 'bad' coin, tossed when the capital is a multiple of three) is played on average 1/3 of the time, and coin 3 (the 'good' coin) the remaining 2/3 of the time, since we are working with the capital modulo three. (The coin probabilities here are p1 = 0.1 - epsilon for coin 2 and p2 = 0.75 - epsilon for coin 3, with epsilon = 0.005.) Under these assumptions, the probability of winning game B is 1/3*(0.1-0.005) + 2/3*(0.75-0.005) = 0.5283, which is greater than 0.5, so the game appears to be winning. Wrong.
Figure 3b. Three state Markov chain, formed by y = capital mod 3.
In this scenario we have only considered the games from a statistical point of view, and not correctly as discrete-time Markov chains. Hence, our guess for the equilibrium distribution of y = x mod 3 in game B is not correct.
It is easy to see that y is the state of a three-state Markov chain with transition probabilities (taking epsilon = 0) given by p(1,2) = p(2,0) = 3/4, p(1,0) = p(2,1) = 1/4, p(0,2) = 9/10 and p(0,1) = 1/10. See Figure 3b.




Using discrete-time Markov chain (DTMC) theory, the equilibrium distribution of this chain is (pi_0, pi_1, pi_2) = (0.3846, 0.1538, 0.4615), i.e. (5/13, 2/13, 6/13), when epsilon = 0. The probability of winning game B, pi_0*p1 + (pi_1 + pi_2)*p2, then comes to exactly 0.5, which means that game B is fair.
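The equilibrium figures can be recomputed directly. The following Python sketch builds the three-state chain and finds its stationary distribution by power iteration; the helper name game_b_win_prob is ours, and the coin probabilities p1 = 0.1 - epsilon, p2 = 0.75 - epsilon are taken from the example above.

```python
def game_b_win_prob(eps):
    """Stationary analysis of the chain y = capital mod 3 for game B."""
    p1, p2 = 0.10 - eps, 0.75 - eps
    # Transition matrix (rows: current state, cols: next state).
    # From 0: win (prob p1) -> 1, lose -> 2.
    # From 1: win (prob p2) -> 2, lose -> 0.
    # From 2: win (prob p2) -> 0, lose -> 1.
    P = [[0.0,    p1,     1 - p1],
         [1 - p2, 0.0,    p2    ],
         [p2,     1 - p2, 0.0   ]]
    pi = [1 / 3, 1 / 3, 1 / 3]
    for _ in range(5000):  # power iteration; this chain mixes quickly
        pi = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]
    win = pi[0] * p1 + (pi[1] + pi[2]) * p2
    return pi, win

pi0, w0 = game_b_win_prob(0.0)
pi5, w5 = game_b_win_prob(0.005)
print([round(x, 4) for x in pi0], round(w0, 4))  # [0.3846, 0.1538, 0.4615] 0.5
print([round(x, 4) for x in pi5], round(w5, 4))  # [0.3836, 0.1543, 0.4621] 0.4957
```

The eps = 0 case reproduces (5/13, 2/13, 6/13) and a fair win probability of exactly 0.5.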
When a bias of epsilon = 0.005 is included, the equilibrium distribution changes slightly to (pi_0, pi_1, pi_2) = (0.3836, 0.1543, 0.4621). The probability of winning game B then drops to 0.3836*(0.095) + (0.1543 + 0.4621)*(0.745) = 0.4957. So, in fact, game B is definitely losing when epsilon = 0.005. If you are unfamiliar with Markov chain analysis, you can satisfy yourself of these results via computer simulation of game B.
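For readers who prefer simulation, here is a direct Monte Carlo check of game B in Python (the function name is ours; same parameter values as above):

```python
import random

def simulate_game_b(rounds=1_000_000, eps=0.005, seed=7):
    """Monte Carlo estimate of game B's per-round win rate."""
    rng = random.Random(seed)
    p1, p2 = 0.10 - eps, 0.75 - eps
    capital = wins = 0
    for _ in range(rounds):
        # Python's % keeps negative capitals in {0, 1, 2}, as required.
        p = p1 if capital % 3 == 0 else p2
        if rng.random() < p:
            capital += 1
            wins += 1
        else:
            capital -= 1
    return wins / rounds, capital

rate, final = simulate_game_b()
print(rate)   # close to the DTMC value 0.4957
print(final)  # well below zero: game B loses steadily
```

A million rounds puts the sampling error on the win rate near 0.0005, comfortably resolving the gap between 0.4957 and 0.5.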
