Does having more models in later issues make them greater models?
It seems, as indicated by the bolded sentence, that the straightforward application of a numerical value based on ranking demonstrates the very bias you're trying to disprove.
But I would argue against the merits of your system: assigning numerical values based on ranking only works if you assume the rankings are objective in the first place. They're not. In your system, Macpherson has 50 points and Decker has 44. Does Decker have 88% of the greatness of Macpherson as an SI model? If you disagree (as I'm sure most would), the method falls apart. If you instead assigned values to objective criteria - say, 10 points for a cover and 5 for a non-cover appearance - the rankings would likely look very different. Plus, assigning values based on rankings only compares the rankings relative to one another; it doesn't analyze whether a model deserves inclusion on the list in the first place, which is what I and a lot of other people would argue is the real problem with having so many newer models who have yet to establish legacies with SI.
This is your ultimate method of analysis, but it's irrelevant, IMO. The over-representation of recent models in the bottom half of the rankings - which is the problem - will naturally drag down the average for the most recent decade's models. It doesn't prove that their rankings or their inclusion are justified. On top of that, the only reason this over-representation of recent minor models balances out to a seemingly appropriate average is that the most recent cover models are all ranked suspiciously high.
Personally, I still feel SS's point is valid: a third of the listed models first appeared in the issue within the last five years.