User talk:Phol ende wodan/Combat System Overhaul Draft 1


Just a comment on the generation of samples from a normal distribution for use in the to-hit calculation, following up on a short conversation on IRC. Some methods of doing this:

Use floating point

For example, with the wikipedia:Marsaglia polar method; see the links in that article for a couple of other methods. This is simple, but has a couple of potential drawbacks.

First, apparently there is some opposition to using floating point numbers inside NetHack, on the grounds that different platforms implement them differently, so the results could vary between platforms. I'd be inclined to say "so what?" if the differences are minor and confined to pseudo-random results, but I'm not on the dev team. Also, I think there are already platform-dependent differences in how NetHack computes random numbers. Looking at Source:NetHack 3.6.0/src/rnd.c, everything seems to be based on the Rand() function, which is ultimately provided by the operating system, and intended to return a random integer in the range 0 to RAND_MAX inclusive, for some OS-dependent value RAND_MAX. On my Linux desktop, RAND_MAX is 2147483647, but I suspect on some of the old platforms supported by NetHack, it might be as low as 32767. This will affect the distribution of the return values from rn2 and the other functions in rnd.c.
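
To make that skew concrete, here is a simplified model of the reduction step (this assumes, as the 3.6.0 rnd.c appears to do, that rn2 reduces Rand() modulo its argument; it is not NetHack code):

    #include <stdlib.h>

    #define Rand() rand()   /* stand-in for whatever RNG the platform provides */

    /* Simplified model of rn2's reduction step. */
    int rn2_model(int x)
    {
        return (int)(Rand() % (long)x);
    }

    /* With RAND_MAX = 32767, rn2_model(1000) folds 32768 raw values into 1000
     * buckets: results 0..767 each come from 33 raw values, results 768..999
     * from only 32, so small results are slightly favoured.  The skew gets
     * worse as the argument approaches RAND_MAX. */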

Second, this depends on having a way to generate uniformly distributed real-valued samples, as the common drand48 function does. Do the older platforms supported by NetHack provide functions similar to drand48? If not, it might be necessary to include such a function in NetHack, but a suitable implementation is probably available somewhere.
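
For concreteness, here is a minimal sketch of the polar method, assuming a drand48-style generator is available (the second sample the method produces is simply discarded to keep the sketch short):

    #include <math.h>
    #include <stdlib.h>

    /* One standard-normal sample via the Marsaglia polar method.
     * Assumes drand48() returns a uniform double in [0, 1). */
    double gaussian_sample(void)
    {
        double u, v, s;

        do {
            u = 2.0 * drand48() - 1.0;      /* uniform in (-1, 1) */
            v = 2.0 * drand48() - 1.0;
            s = u * u + v * v;
        } while (s >= 1.0 || s == 0.0);     /* keep only points inside the unit circle */

        return u * sqrt(-2.0 * log(s) / s); /* v times the same factor would be a second, independent sample */
    }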

Table-driven

The to-hit calculation seems to take the form sample > bonus, where sample is a random sample from a Gaussian distribution whose parameters are fixed in the source code, and bonus is the sum of all applicable bonuses, taking the form K/160 for some integer K.

If we assume that the sample will never be more than 10 standard deviations away from the mean (the probability of exceeding this is around 10^{-23}), then a table showing the probability of a hit for each K would have about 3200 * stddev entries (a 20-standard-deviation span, at 160 values of K per standard deviation). It should be feasible to precompute this table and insert it in the NetHack source code. Then the to-hit calculation can be done with just a single sample from a uniform distribution.
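
To illustrate, the table could be produced offline with something like the following, assuming purely for the example that the fixed distribution is a standard normal and that a hit means sample > K/160:

    #include <math.h>
    #include <stdio.h>

    /* Build-time sketch (not game code): print P(sample > K/160) for a
     * standard normal, covering 10 standard deviations either side of the
     * mean (with stddev 1, that is K from -1600 to 1600).  How each
     * probability is stored in the game is the separate question below. */
    int main(void)
    {
        int K;

        for (K = -1600; K <= 1600; K++) {
            double t = K / 160.0;
            double p_hit = 0.5 * erfc(t / sqrt(2.0));
            printf("%d %.9f\n", K, p_hit);
        }
        return 0;
    }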

But how to represent the probabilities in the table?

They could just be floating point numbers. This has the same considerations as in the previous method, but is more efficient.

Or we could pick some large integer N, approximate the probability as a rational number k/N, and store k in the table. Then the to-hit check becomes just rn2(N) < k. But an improved version of rn2 should be used. With the existing rn2, the value of RAND_MAX from my Linux desktop, and N = 10^9, small results will be 50% more likely than large results (the modulo reduction covers the low end of the range three times, but the high end only twice). If RAND_MAX is 32767, the results will be horrible.
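
Here is one sketch of what an improved rn2 might look like, using rejection to remove the modulo bias (Rand and RAND_MAX as used by rnd.c; ktable and KMIN in the usage comment are hypothetical names):

    #include <stdlib.h>

    #define Rand() rand()   /* stand-in for the platform RNG used by rnd.c */

    /* Uniform integer in 0..n-1 with no modulo bias: reject raw values from
     * the incomplete block at the top of Rand()'s range, then reduce.
     * Requires n <= RAND_MAX + 1, so on a platform with RAND_MAX = 32767
     * several draws would have to be combined for a large N. */
    long rn2_unbiased(long n)
    {
        long excess = (RAND_MAX % n + 1L) % n;   /* raw values to throw away */
        long limit = (long)RAND_MAX - excess;    /* last acceptable raw value */
        long r;

        do {
            r = Rand();
        } while (r > limit);
        return r % n;
    }

    /* The table-driven to-hit check would then be something like
     *     hit = rn2_unbiased(N) < ktable[K - KMIN];
     * where ktable and KMIN are hypothetical names for the precomputed
     * table and its lowest index. */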

If great precision is required, we could take N = 2^{64}, store k as four 16-bit unsigned integers, and use one to four calls to rn2(65536) in the to-hit calculation. But that's probably overkill.
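
For completeness, a sketch of that variant, with k stored most significant 16-bit digit first so the comparison can usually stop after the first draw (the rn2(65536) calls have the same uniformity caveats as above):

    extern int rn2(int);   /* rnd.c's uniform helper: 0..x-1 */

    /* Hit with probability k / 2^64, where k is stored as four 16-bit
     * digits, most significant first.  Each iteration draws the next
     * 16-bit digit of a uniform random 64-bit fraction; the comparison
     * is usually settled after the first draw. */
    int hit_64bit(const unsigned short kdigit[4])
    {
        int i;

        for (i = 0; i < 4; i++) {
            int r = rn2(65536);      /* next digit of the random fraction */

            if (r < kdigit[i])
                return 1;            /* random fraction is below k: hit */
            if (r > kdigit[i])
                return 0;            /* above k: miss */
        }
        return 0;                    /* exactly equal to k: call it a miss */
    }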

Add uniform distributions

As a crude but very simple method, one could approximate the normal distribution as a sum of several independent samples from a uniform distribution. Thus instead of checking gaussian-sample > K/160, one might check something like 8d160 - 8*161/2 > K*c, where c is a constant chosen so as to get suitable results, and maybe other numbers should be used instead of 8 and 160. For this purpose, the existing d() function in rnd.c might be quite usable, as long as the number 160 isn't replaced by a much larger number.
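
A sketch of that check using the existing d(), with CNUM/CDEN standing in for whatever integer ratio the constant c works out to be:

    extern int d(int, int);   /* rnd.c's dice roller: sum of n rolls of 1..x */

    /* Hypothetical integer ratio standing in for the scale constant c. */
    #define CNUM 4
    #define CDEN 5

    /* Sketch: approximate "gaussian-sample > K/160" by a sum of uniform
     * rolls.  d(8, 160) has mean 8*161/2 = 644 and, by the central limit
     * theorem, a roughly normal shape. */
    int uniform_sum_hit(int K)
    {
        return d(8, 160) - (8 * 161 / 2) > (K * CNUM) / CDEN;
    }

(For a Gaussian with standard deviation 1, c would be roughly stddev(8d160)/160, i.e. about 130.6/160 = 0.8, which is where the placeholder 4/5 comes from; a distribution with different parameters would need different constants.)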

--Truculent (talk) 00:33, 8 November 2017 (UTC)

Overall impression

I find the idea of considering evasiveness and damage reduction separately interesting, but I have to say that I don't like this proposal in its current state. Maybe I'm missing something, but the proposal seems to make most non-shield armor completely useless.

As an extreme example, consider plate mail. Wearing plate would cause a large weight burden, and significantly increase the chances of being hit. And for what benefit? At most one point of damage reduction. Why would anyone ever wear plate?

I'm guessing that under this proposal, the only armor worth wearing would be:

  • Shields.
  • Anything with an armor penalty of 0.
  • Armor with a significant magical benefit, such as gauntlets of power.

Here's a sketch of a system I think might work better (a rough code illustration follows the list):

  • Damage can be reduced to 0. A fox biting plate armor shouldn't do any damage.
  • The average damage reduction is quite a bit higher than in this proposal. Maybe it should be a percentage of the attack damage, not a fixed amount.
  • Damage reduction is variable, not fixed. In particular:
  • There is a chance, maybe only a small chance, of no damage reduction at all. A fox biting into a spot not covered by armor does its ordinary 1d3 damage.
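
For illustration only, here is a rough sketch of how those four points might combine; every name and number in it is hypothetical, and rn2 is used purely as the stock uniform helper:

    extern int rn2(int);   /* rnd.c's uniform helper: 0..x-1 */

    /* dr_pct: the armor's nominal damage reduction, as a percentage of the
     * incoming damage (hypothetical parameter). */
    int apply_damage_reduction(int damage, int dr_pct)
    {
        int reduced;

        if (!rn2(10))           /* ~10% of the time the blow finds a gap... */
            return damage;      /* ...and there is no reduction at all */

        /* variable reduction: between half and one-and-a-half times the
         * nominal percentage */
        reduced = damage * dr_pct * (50 + rn2(101)) / (100 * 100);
        damage -= reduced;

        return damage > 0 ? damage : 0;   /* damage can drop all the way to 0 */
    }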

--Truculent (talk) 22:10, 8 November 2017 (UTC)