Quote:
curtadams said:
You could optimize by averaging multiple "dry runs" or changing the level of damage that justifies switching to a new target (morale break, serious injury, or death are all reasonable candidates). The programming is not particularly difficult. Although the computer demands are fairly high (at least doubling the time for each battle) in Dom's batched turn-processing mode that's not a serious concern.
No, no, no... the computational demands are O(n^2 * log(n)) (the same as with the deterministic method) no matter how many runs you choose... and are therefore asymptotically identical
Seriously, though, I've considered this before, and wholly endorse it. I'd love to have turn generation take 10x as long if my generals would be 10% less stupid. But, it is not done at present...
[Note: if you are unfamiliar with "Big O" notation, please skip to the VERY bottom.]
...so where did I pull O(n^2 * log(n)) from? Keep in mind when reading this that constants are ignored in Big O math, so curtadams's claim of "at least doubling the time" is quite correct in the real world. That said, let n be the number of squads per side (or in total; it does not affect the Big O math, since there are 2 sides per battle, a constant number):
Deterministic (current) method:
1) Make a list of enemy squads: O(n)
2) Copy this list and give it to a friendly squad: O(n)
3) Assign each enemy squad a priority based on a function of size, relevance to the friendly squad's orders (e.g. "Fire Large Monsters"), etc: O(n)
4) Calculate distance to each enemy squad on the list, and divide its priority by its distance: O(n)
5) Sort (descending) resultant list: O(n(log(n)))
6) Target element 0: O(1)
7) Fire: O(x) [once per member, but members per squad is limited, so assumed constant, or O(1)]
8) Repeat for each ranged squad: multiply by n [in practice it will be less than n]
Result: (O(n) + O(n) + O(n) + O(n) + O(n log(n)) + O(1) + O(1)) * n = O(n^2 * log(n))
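As a concrete illustration, the deterministic pass above might look like the following minimal Python sketch. Squad, priority_of, and distance_to are hypothetical stand-ins invented for this example, not actual Dominions II internals.

```python
import math
from dataclasses import dataclass

@dataclass
class Squad:
    x: float
    y: float
    size: int

def distance_to(a: Squad, b: Squad) -> float:
    # Never return 0, so the priority/distance division below is safe.
    return math.hypot(a.x - b.x, a.y - b.y) or 1e-9

def priority_of(shooter: Squad, target: Squad) -> float:
    # Step 3: stand-in priority function (size only, for illustration;
    # the real game would also weigh orders like "Fire Large Monsters").
    return float(target.size)

def pick_target(shooter: Squad, enemies: list[Squad]) -> Squad:
    # Steps 2-4: score each enemy squad: O(n).
    scored = [(priority_of(shooter, e) / distance_to(shooter, e), e)
              for e in enemies]
    # Step 5: sort descending by score: O(n log n).
    scored.sort(key=lambda t: t[0], reverse=True)
    # Step 6: target element 0: O(1).
    return scored[0][1]
```

For example, a squad at the origin facing a size-5 squad at distance 1, a size-20 squad at distance 1, and a size-20 squad at distance 10 would pick the nearby size-20 squad.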
Dry-Run-Simulation (Monte-Carlo) method:
1) Make a list of enemy squads: O(n)
2) Copy this list and give it to a friendly squad: O(n)
3) Sort (ascending) by distance: O(n(log(n))) [note: 3 and 4 do not change the big O complexity, but drastically reduce real-world complexity]
4) Eliminate out-of-range targets: O(log(n)), using binary search
5) Assign each enemy squad a priority based on a function of size, relevance to the friendly squad's orders (e.g. "Fire Large Monsters"), distance, etc: O(n)
6) Sort (descending) by priority: O(n(log(n))) [again, this does not alter theoretical complexity]
7) Run the Monte Carlo simulation ('dry run'): O(r*x*n), where x is squad size, which is limited by game mechanics to around 25-50 and thus can be abstracted as a constant, and r is the number of runs; so one squad's simulations cost O(r*n), or O(r*n^2) across all squads.
7a) calculate value of each simulation: O(x), where x is the number of firing troops. Each squad, of course, has a finite number of units due to commander leadership limits, as mentioned before... so this becomes O(1).
7b) sort (ascending) by cumulative simulated value: O(n(log(n)))
7c) cull worst targets, so they are not included in the next simulation: O(n) [the way I would do it; though O(log(n)) is also possible]
8) Target element 0: O(1)
9) Fire: O(1) [see 7a for reason]
10) Repeat for each ranged squad: multiply by n [in practice it will be less than n]
Result: (O(n) + O(n) + O(n log(n)) + O(log(n)) + O(n) + O(n log(n)) + O(r*n)*O(1) + O(n log(n)) + O(n) + O(1) + O(1)) * n = O(r * n^2 * log(n))
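Steps 7 through 8 of the dry-run method might look like this minimal Python sketch. simulate_volley, the 30% per-shot kill chance, and the keep-the-better-half culling rule are all invented for illustration; they are not how Dominions II actually resolves fire.

```python
import random

def simulate_volley(shooter_size: int, target_size: int,
                    rng: random.Random) -> int:
    # 7a: value of one simulated run -- each shot "kills" with an
    # invented 30% chance, capped at the size of the target squad.
    kills = sum(rng.random() < 0.3 for _ in range(shooter_size))
    return min(kills, target_size)

def monte_carlo_pick(shooter_size: int, target_sizes: list[int],
                     runs: int = 8, seed: int = 0) -> int:
    """Return the index of the target chosen after r dry runs."""
    rng = random.Random(seed)                    # seeded, so results repeat
    candidates = list(range(len(target_sizes)))  # surviving target indices
    totals = [0.0] * len(target_sizes)           # cumulative simulated value
    for _ in range(runs):                        # step 7: r runs, O(r*n)
        for i in candidates:
            totals[i] += simulate_volley(shooter_size, target_sizes[i], rng)
        # 7b/7c: sort by value and cull the worst half before the next run.
        candidates.sort(key=lambda i: totals[i], reverse=True)
        candidates = candidates[:max(1, (len(candidates) + 1) // 2)]
    return candidates[0]                         # step 8: target element 0
```

With three targets of sizes 5, 30, and 1, a squad of 10 shooters will usually settle on the large squad, since simulated kills against the 1-man squad are capped almost immediately.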
However, it is important to note that (depending on the heuristic employed in steps 7a-7c) the computational complexity can be reduced further.
So, (r*n^2*log(n)) / (n^2*log(n)) = r. But assuming r (the number of simulations) is finite and bounded, it becomes a constant, yielding a differential of O(1). In other words, on an infinite battlefield in Dominions II, with an arbitrarily large number of squads, the relative complexity of simulation versus deterministic, rule-based target selection asymptotically converges on 1.

Put differently: the time it takes to calculate the result of a huge battle (with n squads) is (if calculated as efficiently as possible, and if I did not make any mistakes) directly proportional to (n^2)*log(n). Were the suggested dry-run (Monte Carlo) simulation added, the time would be directly proportional to r*(n^2)*log(n), where r is the number of simulations. If r is a constant, then the asymptotic complexity remains unchanged when moving to the more accurate Monte Carlo method: every battle takes r times longer to compute, and quadrupling the number of squads in a battle increases the calculation time by a factor of a bit over 16 in either system (16 from the n^2 term, plus a little more from the slowly-growing log). Therefore, the quality of the tactical battle AI can be vastly increased [by roughly a factor of log(r), where r is the time multiplier; I will not prove this here, but it is a known property of statistical simulations] without changing the relationship between speed and battle size.
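The ratio argument above can be sanity-checked numerically. The cost functions here are just the asymptotic formulas with every constant set to 1 (log base 2 chosen arbitrarily, since the base is another constant Big O ignores):

```python
import math

def deterministic_cost(n: float) -> float:
    # Asymptotic cost of the current method: n^2 * log(n).
    return n * n * math.log2(n)

def monte_carlo_cost(n: float, r: int) -> float:
    # Asymptotic cost of the dry-run method: r * n^2 * log(n).
    return r * n * n * math.log2(n)

n, r = 64, 10
# The ratio of the two methods is exactly r, independent of n.
ratio = monte_carlo_cost(n, r) / deterministic_cost(n)
# Quadrupling n multiplies the cost by 16 * log(4n)/log(n): a bit over 16.
growth = deterministic_cost(4 * n) / deterministic_cost(n)
```

At n = 64 the growth factor works out to 16 * 8/6, about 21.3, and it creeps down toward 16 as n grows.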
Please note that these calculations are for RANGED UNITS ONLY, not melee, which cannot be 'squadded' since they do their targeting individually (to some extent).
*** Big O Notation Briefing ***
Also known as "asymptotic complexity". Consider the function y = a + b*x + c*x^2 + d*x^3. At x=0, the only relevant coefficient is 'a', since every other term is zero. Now consider the situation where a = -4096, b = 1024, c = -128, d = 1. At x=0, y = -4096; it is dominated by term 1 (-4096), since the other terms are all zero. But at x=32, it is dominated by term 3 (-128*32^2 = -131072), whose magnitude exceeds term 1 (-4096), term 2 (32768), and term 4 (32768). And once x > 128, term 4 (x^3) has a greater magnitude than the other three, so it dominates from then on. In any case, no matter how large the coefficients on the other terms are, one term will dominate once x (the relevant input) reaches sufficient size, and it is always the one with the highest exponent (in a polynomial expression).
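The domination of the highest-exponent term can be checked directly, using the coefficients from the example above (a = -4096, b = 1024, c = -128, d = 1):

```python
def terms(x: int) -> tuple:
    # The four terms of y = a + b*x + c*x^2 + d*x^3 from the example.
    return (-4096, 1024 * x, -128 * x * x, x ** 3)

def dominant(x: int) -> int:
    # 1-based index of the term with the largest magnitude at this x.
    t = terms(x)
    return max(range(4), key=lambda i: abs(t[i])) + 1
```

Here dominant(0) is term 1, dominant(32) is term 3, and for every x above 128 it is term 4: the highest exponent always wins eventually.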
Big O notation uses this fact to ignore all factors that will become irrelevant at very large values of "x" or "n" (the usual name for the input-size variable). Since the worst case matters most to the developer (e.g., it is better to have 10 battles of 10 seconds each than 9 battles of 0.01 seconds each plus one battle that takes 5 hours), Big O notation is utterly critical to software design, even though it discards information that seems relevant.