Wrong yet again.

Awesome response. I'm not talking about "coddling" or "how one plays, or ought to play, the game." I'm talking about just the underlying math. "Stark," used here, is a nice word for "too simplistic to plausibly model the world we are interested in talking about." The probabilities in a D&D game are too difficult to get an appropriate sense of, except in simple scenarios, b/c of the complication of the strategy space (breadth alone isn't the problem; it's reasonably easy to solve many games where the strategy space is infinite, as in the real number line). People can try to run through scenarios, and maybe that gives you some traction, but at the level of a general statement you're not going to get at it.

[begin actual game theory]

Consider chess, an example I've already posted about at least once before. See, I know w/out any doubt there is a Nash equilibrium to chess. That is, given what the other player is doing, there is a "right" way of playing it, given the goal of winning, etc. This is a fact, guaranteed by the definition of chess and by the definition of Nash equilibrium. However, we have no idea what it is. It's just too computationally complex, which is saying something given modern computer power. Now, that's a game w/ no chance (no dice rolling), identical moves available to each player (white and black have the same pieces and the same exact beginning configuration), and well-defined victory conditions.
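The existence claim is just backward induction: any finite perfect-information game has a value under optimal play. Here's a minimal sketch (my toy tree, not anything from chess itself) of why the equilibrium provably exists yet is computationally out of reach for a real game — this runs instantly on a tiny tree, while chess's tree has on the order of 10^120 lines of play by Shannon's classic estimate:

```python
# Backward induction (Zermelo-style) on a toy perfect-information game.
# Nested lists are decision nodes; integers are terminal payoffs to the
# maximizing player. Chess has exactly this structure, just vastly bigger.

def value(node, maximizing):
    """Return the game value of `node` assuming optimal play by both sides."""
    if isinstance(node, int):        # leaf: payoff to the maximizer
        return node
    children = [value(c, not maximizing) for c in node]
    return max(children) if maximizing else min(children)

# A tiny three-branch game tree, "solved" by brute force.
tree = [[3, 12], [2, [4, 6]], [14, 5, 2]]
print(value(tree, True))  # → 3
```

The point of the sketch: nothing in the algorithm is mysterious; the obstacle is purely that the recursion visits every line of play, which is hopeless at chess scale.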

Once you add in chance, and the incredibly large number of options available in something like Magic: the Gathering, not to mention D&D, it becomes incredibly hard just to intuit what will happen over a large set of interactions. Moreover, D&D characters change very frequently, through leveling, treasure, etc. And none of this accounts for human error; at that point we have to include things like "trembling hand" refinements, which make any general conclusions kind of hand-wavy. Actually, once we are in what is essentially an infinitely repeated game — any set of interactions that is large and lacks a predetermined stopping point — we usually have to resort to folk theorems, so we're already on shaky ground.
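To give a flavor of the repeated-game logic (my stage payoffs, purely illustrative — not anything from the thread): in an infinitely repeated Prisoner's Dilemma, a grim-trigger strategy sustains cooperation only if players are patient enough, and the whole calculation hinges on there being no predetermined stopping point:

```python
# Grim-trigger sustainability check in an infinitely repeated game.
# Stage payoffs (assumed for illustration): cooperate = 3, defecting
# against a cooperator = 5, mutual defection = 1. Cooperation holds iff
# the present value of cooperating forever beats a one-shot deviation
# followed by permanent punishment.

def cooperation_sustainable(coop, temptation, punish, delta):
    """Compare discounted payoff streams under discount factor delta."""
    pv_cooperate = coop / (1 - delta)                    # cooperate forever
    pv_deviate = temptation + delta * punish / (1 - delta)  # cheat once, then punished
    return pv_cooperate >= pv_deviate

print(cooperation_sustainable(3, 5, 1, 0.6))  # True: patient players cooperate
print(cooperation_sustainable(3, 5, 1, 0.3))  # False: impatient players defect
```

With these numbers the cutoff is delta = 0.5, and that's the folk-theorem rub: a huge range of outcomes can be supported as equilibria depending on patience and punishment schemes, which is exactly why general conclusions get hand-wavy.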

It is true that in a *very* simple game, one in which, say, everyone makes just straight attack rolls, saves, and damage rolls, you could maybe get some traction on things. But once you add in things like Celerity, Diamond Mind counters, and complicated beliefs about what one's opponents can do, as well as the highly tactical nature of 3.5 D&D, where one square on a battlemap can make an actual difference, the probabilities become prohibitively hard to estimate.

[/end actual game theory]

[begin actual probability calculus]

Now, here's the thing. If it's a straight iterative probability, then it's all very easy. If the probability that Team Player loses is non-zero, then they *will* lose. Eventually. Guarantee it. Even if it is very, very small. This is Sunic's point about iterative probability. The only reason to play, in such a world, is I suppose out of idle curiosity as to when that's going to happen. Although if I have any sense of what the probability-of-loss function is, then I don't even need to do that: I can make a very educated guess as to when it's going to happen.
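The arithmetic behind that claim is one line. Assuming (my number, purely for illustration) a fixed 2% chance of a loss per encounter, the cumulative probability of at least one loss is 1 - (1 - p)^n, which marches inexorably toward 1, and the geometric distribution supplies exactly the "educated guess" about when:

```python
# Iterative probability of eventual loss under a fixed per-encounter
# loss chance p (assumed 2% here for illustration).
import math

p = 0.02
for n in (10, 50, 200, 1000):
    # cumulative chance of at least one loss in n encounters:
    print(n, 1 - (1 - p) ** n)   # roughly 0.18, 0.64, 0.98, then ~1.0

# The "educated guess": smallest n at which a loss is more likely than not.
median = math.ceil(math.log(0.5) / math.log(1 - p))
print(median)  # → 35
```

That's the whole point in miniature: with a constant non-zero p and enough iterations, ruin is not a risk, it's a schedule.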

It's not straight probability, though: there's strategic interaction, updating of beliefs, etc.

[end actual probability calculus]

I'm writing this mostly b/c the misuse of these concepts has irked me. I do not expect it to persuade Sunic; he seems impervious to such things, but I wanted to bring it to light. I have my own feelings about why we actually bother playing the game and about the appropriate ends of character optimization, but I think I've rambled enough, and I have work waiting for me.