I also need to dig myself out of a bit of trouble, because I have been confusing two separate issues. The first is the proportional representation of probability; the second is how to represent difficulty on a meaningful scale, which I am arguing should also be proportional.

**First the probability:** The probability issue is clearly defined. On the additive side the uniform distribution is the ultimate example. It has a limited range, and you can get from probability zero to one in a finite series of steps (but not infinitely small steps!). When we graph the cumulative probability, the uniform distribution forms a *straight line*.
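To make the "straight line" concrete, here is a minimal Python sketch (the function name `uniform_cdf` and the range [-1, 1] are just illustrative choices): equal steps along the x-axis add equal amounts of cumulative probability.

```python
# The CDF of a uniform distribution on [a, b] is a straight line:
# F(x) = (x - a) / (b - a).  Equal steps in x add equal amounts of
# cumulative probability - the additive behavior described above.
def uniform_cdf(x, a=-1.0, b=1.0):
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    return (x - a) / (b - a)

# Nine evenly spaced points from a to b; every step adds exactly 0.125.
points = [uniform_cdf(-1.0 + 0.25 * k) for k in range(9)]
steps = [round(points[k + 1] - points[k], 6) for k in range(8)]
print(steps)  # prints [0.125, 0.125, 0.125, 0.125, 0.125, 0.125, 0.125, 0.125]
```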

On the proportional side the best example is the logistic distribution, which I had intentionally left out for simplicity. It has an infinite range (negative to positive infinity), but as you go left or right on the scale you never quite reach probability zero or one (though you may get arbitrarily close). When we graph the odds on a logarithmic scale (Log Odds, or "LO"), they form a straight line. I have redone my graphs from Part 2 to include the logistic distribution. You can see that the logistic PDF looks very different from all the others, but in the CDF and log-odds charts it looks very much like the Laplace distribution. This is perhaps deceptive, because the logistic distribution has **very** heavy tails.
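The straight-line claim is easy to verify in a few lines of Python (a sketch; the helper names `logistic_cdf` and `log_odds` are my own): for the standard logistic distribution, taking the log odds of the CDF gives back the z value exactly.

```python
import math

# For the standard logistic distribution the CDF is F(z) = 1 / (1 + e^(-z)),
# so the log odds log(F / (1 - F)) simply recover z: on a log-odds chart
# the logistic distribution is a perfectly straight line.
def logistic_cdf(z):
    return 1.0 / (1.0 + math.exp(-z))

def log_odds(p):
    return math.log(p / (1.0 - p))

for z in (-2.0, -1.0, 0.0, 1.0, 2.0):
    print(f"z = {z:+.1f} -> log odds = {log_odds(logistic_cdf(z)):+.4f}")
```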

There is an additional issue here because there is no obvious reference probability; 0 and 1 are good reference probabilities for the uniform, but that doesn't work for distributions with an infinite range. I have arbitrarily chosen 0.5 as the reference probability for these graphs, which occurs at **Z=0**.

It will help to have an example to think through this; imagine that you have a set of dice that will generate random numbers from each of these distributions. For the uniform and triangular (2d6, 2d10, 2dX) this is very familiar; any single die is uniform, and any pair of dice gives a triangular distribution. For the normal, Laplace, and logistic distributions we need to imagine we have some "magic dice" that will do what we need. These would be very unusual dice indeed, but it is helpful to compare them to the behavior of the 2dX dice we know. The normal distribution dice will behave most like the 2dX dice. The Laplace dice will tend to roll very close to the average most of the time, but will occasionally roll very high or very low. The logistic dice will tend to roll farther from the average than the other dice, generating relatively more extremely high or low rolls.
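If you wanted to build such "magic dice" in software, one standard route is inverse-transform sampling: feed an ordinary uniform roll through a distribution's inverse CDF. A small sketch (the function names and the default scale parameters are illustrative choices, not anything from the charts):

```python
import math, random

# "Magic dice" via inverse-transform sampling: a plain uniform roll
# u in (0, 1) is pushed through a distribution's inverse CDF to get
# a roll from that distribution.
def laplace_die(u, b=1.0):
    # Inverse CDF of a Laplace distribution centered at 0 with scale b.
    return b * math.log(2.0 * u) if u < 0.5 else -b * math.log(2.0 * (1.0 - u))

def logistic_die(u, s=1.0):
    # Inverse CDF of a logistic distribution centered at 0 with scale s.
    return s * math.log(u / (1.0 - u))

# Both dice roll their average at u = 0.5 ...
print(laplace_die(0.5) == 0.0, logistic_die(0.5) == 0.0)  # True True

# ... and a large batch of rolls averages out near 0.
random.seed(1)
rolls = [logistic_die(random.random()) for _ in range(100_000)]
print(round(sum(rolls) / len(rolls), 1))
```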

**Now the difficulty:** The X-axis on the charts is standardized to a *common scale of difficulty*, the **Z** of the standard normal distribution. Think of your dice again, and imagine you are rolling with a "+1" modifier (-1 if you like). On the Z-scale a "+1" is one standard deviation, one common unit of variability, and it is the *same for all of these dice*.
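To see why standardizing matters, consider what a flat "+1" modifier is worth in Z units for ordinary dice pools. A quick sketch (the helper `pool_sd` is mine; it uses the textbook variance (X² - 1)/12 of a single X-sided die):

```python
import math

# What "+1" means on the common Z scale depends on the dice: divide
# the modifier by the dice pool's standard deviation.  A single
# X-sided die has variance (X^2 - 1) / 12, and rolling two of them
# doubles the variance.
def pool_sd(faces, count=2):
    return math.sqrt(count * (faces**2 - 1) / 12.0)

for faces in (6, 10, 20):
    sd = pool_sd(faces)
    print(f"2d{faces}: SD = {sd:.3f}, so '+1' = {1.0 / sd:+.3f} Z")
```

The same "+1" is a much bigger nudge on 2d6 than on 2d20, which is exactly why the charts put everything on one Z scale before comparing.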

One more graph - this is the same zoom-in chart from Part 2, but I have annotated it for discussion (no logistic distribution here). Between log-odds of -1 and +1 lies about 45% of the total probability of all these distributions. That means nearly half of the rolls of your dice will be within this range. Within this limited range the cumulative probabilities for these distributions are very similar. The uniform and Laplace distributions are practically on top of each other here, though the shapes of these two distributions (see the PDF chart) could hardly be more different (the logistic distribution would be very close to these as well). Likewise for the 2dX and normal distributions; these are barely distinguishable within this range. Although these distributions might in fact be very different, the differences in the cumulative probabilities only matter at the high and low ends, not in the middle.
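There is a neat reason the same share shows up for every curve: the band between log-odds -1 and +1 pins down the same pair of cumulative probabilities regardless of the distribution, so it must contain the same fraction of rolls for any continuous distribution. A quick check (the helper `lo_to_p` is my own name for the inverse of the log-odds transform):

```python
import math

# The band between log-odds -1 and +1 corresponds to cumulative
# probabilities between 1/(1+e) and e/(1+e).  Since that is a statement
# about CDF values themselves, the band holds the same share of rolls
# for any continuous distribution - roughly 46%, consistent with the
# "about 45%" read off the chart.
def lo_to_p(lo):
    return 1.0 / (1.0 + math.exp(-lo))

low, high = lo_to_p(-1.0), lo_to_p(1.0)
print(f"CDF runs from {low:.3f} to {high:.3f}: {high - low:.1%} of all rolls")
```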

It has been a long haul, but I can finally (FINALLY!) start discussing why I think we don't notice the difference between additive and proportional probability.

- As I demonstrated above, for the middle range of difficulty there isn't much difference in the cumulative probabilities, and one distribution might do about as well as any other. Games tend to emphasize tasks of medium difficulty because they are interesting - it's not much fun to play a game where you are trying to do something that is practically impossible or incredibly easy. In the one case hundreds of rolls might be needed to succeed, and in the other success is no challenge at all. Good games avoid this by keeping difficulty in the middle, where the chance of success or failure is *interesting*.
- In games it is common for the dice rolls for success to be identical in similar situations. This is not the case in reality; in the real world things are constantly changing, and many tasks are never exactly the same twice. It seems likely that we perceive the average difficulty of many tasks, which may mask the proportional relationship. There is a mathematical question here about the average difficulty of tasks and whether this means that the normal distribution better represents how we perceive difficulty. It seems possible, but I don't know how to justify it mathematically.
- Oops? Maybe the example that led me to the Laplace distribution as a motivating example is wrong. There is a simplifying assumption I made in that example that may not be quite right, and I'll have to work it through again to examine it carefully. There are a lot of things I didn't define very carefully that might come back to bite me here, but it's a blog, not a textbook, and I think my main points are essentially correct.

**A closing thought:** What does it mean to measure difficulty on a scale from negative to positive infinity? There probably are some tasks that are too easy to fail, or too difficult to ever succeed, yet on an infinite scale there is always some probability of each. This seems to border on a philosophical question, but if I work at it perhaps I can pin it down better than I have so far.

In Part 4, I have a game versus reality example to show you. Stay tuned!

Related:

The Endeavour: Sums of uniform random values