by Rienzilla » 07 Apr 2013, 13:13
What about a different approach?
The goal of the CPU speed benchmark is to prevent games from running at a slow simspeed because some players have a PC that can't handle the size of game they're playing, right? So why don't we measure exactly what we want to know?
I would periodically measure the number of units in a game, and the actual max simspeed for a player, during every game, and store that in the database. You can normalize that into a performance number or curve (I haven't thought extensively about a good normalization, but as examples: 12000 units with +10 simspeed is 100/100, 1 unit with -10 simspeed is 0/100. Or a curve: simspeed at 100, 500, 1000, 2000, 4000 etc. units, which you can extrapolate and then integrate to get a meaningful number). Then, remember just the last 10 or 100 measurements for every unit count and average those, so that a player who buys a new PC, or disables his seti@home or whatever else eats his processing power, improves his cpu rating. Maybe we can even somehow get a unique pc-identifier, so that a user who plays on different PCs is also tracked accurately.
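To make the first normalization example concrete, here's a minimal sketch in Python. The anchor points (12000 units at +10 simspeed = 100, 1 unit at -10 = 0) are the ones from above; weighting the two axes 50/50 is just an arbitrary choice for illustration:

[code]
def normalize(unitcount, maxsimspeed):
    """Map one [unitcount, maxsimspeed] measurement to a 0-100 score."""
    u = (unitcount - 1) / (12000 - 1)   # 0..1 along the unit-count axis
    s = (maxsimspeed + 10) / 20         # 0..1 along the simspeed axis
    return max(0.0, min(100.0, 100 * (u + s) / 2))
[/code]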
So, approximately:
- During game: measure and store tuples [unitcount, maxsimspeed] in the database, for (unitcount = 1; unitcount < 12000; unitcount += 100) (a hypothetical sketch follows below the list; Zep, is this possible as far as you know?)
To calculate the actual cpu performance index (also sketched below):
- Server side: average the last 10 (or so) maxsimspeed values for every unitcount from the loop above. This results in a set of tuples S = [unitcount, avgmaxsimspeed]
and then either:
- fit a curve C over set S
- calculate the integral I of C over unitcount = 0 to 12000
or: publish 3 numbers, for small (~200), medium (~1000), and large (~4000) unit counts.
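A purely hypothetical sketch of the in-game measurement, in Python just to show the shape of it. current_unit_count() and current_max_simspeed() are made-up stand-ins, since I don't know what the engine actually exposes:

[code]
import random

# Made-up stand-ins for whatever the engine really exposes; they just
# fake plausible values so the sketch runs.
def current_unit_count():
    return random.randint(1, 12000)

def current_max_simspeed():
    return random.uniform(-10, 10)

reported = set()
samples = {}  # unitcount bucket -> list of maxsimspeed measurements

def sample_performance():
    """Call periodically during the game: store at most one
    [unitcount, maxsimspeed] tuple per 100-unit bucket, as in the
    loop above (here into a dict instead of the database)."""
    bucket = max(1, current_unit_count() // 100 * 100)
    if bucket not in reported:
        reported.add(bucket)
        samples.setdefault(bucket, []).append(current_max_simspeed())
[/code]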
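And a rough sketch of the server-side calculation. The window of 10 samples, the cubic fit, and the scaling of the integral onto 0-100 are all placeholder choices, not a worked-out design:

[code]
import numpy as np

def performance_index(samples, window=10):
    """samples: dict mapping unitcount -> list of maxsimspeed values
    (newest last), as collected by the in-game measurement."""
    # Average the last `window` measurements per unitcount -> set S
    s = {uc: np.mean(vals[-window:]) for uc, vals in samples.items()}
    counts = np.array(sorted(s))
    speeds = np.array([s[uc] for uc in counts])

    # Fit a curve C over S (a cubic polynomial, chosen arbitrarily)
    c = np.poly1d(np.polyfit(counts, speeds, deg=3))

    # Integral I of C over unitcount = 0 to 12000, turned into a mean
    # simspeed and mapped from [-10, +10] onto [0, 100]
    antideriv = c.integ()
    mean_speed = (antideriv(12000) - antideriv(0)) / 12000
    index = 100 * (mean_speed + 10) / 20

    # Or: the three headline numbers for small/medium/large unit counts
    headline = {uc: round(float(c(uc)), 1) for uc in (200, 1000, 4000)}
    return index, headline

# Made-up example: a player who holds +5 simspeed up to 4000 units,
# then degrades linearly towards -5 at 12000 units.
fake = {uc: [5.0 if uc <= 4000 else 5.0 - 10.0 * (uc - 4000) / 8000.0]
        for uc in range(1, 12000, 100)}
print(performance_index(fake))
[/code]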
--
Rien