Each framework is scored on two separate measures, which are simply averaged. The two measures are:

- GitHub stars
- Stack Overflow questions tagged with the framework
Since these two measures of popularity are on different scales, the final scores are normalized to a scale of 0-100. The scores use a log scale because the raw counts span such a large range: a framework with a Stack Overflow score of 90 may have thousands of questions, while a framework scoring 10-20 might have just a handful.
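The normalization described above can be sketched roughly as follows. This is a hypothetical illustration, not the site's actual code: the function names and the exact min/max scaling choice are assumptions, but it shows how a log scale maps counts spanning several orders of magnitude onto 0-100, and how the two measures are then averaged.

```python
import math

def normalize(counts):
    """Map raw counts (e.g. stars or question counts) onto a 0-100 log scale.

    Hypothetical sketch: scales the log of each count linearly so the
    smallest framework gets 0 and the largest gets 100.
    """
    logs = [math.log10(c) for c in counts]
    lo, hi = min(logs), max(logs)
    return [100 * (x - lo) / (hi - lo) for x in logs]

def final_score(github_score, stackoverflow_score):
    # The final score is simply the average of the two normalized measures.
    return (github_score + stackoverflow_score) / 2
```

For example, `normalize([1000, 100, 10])` gives `[100.0, 50.0, 0.0]`: each tenfold drop in raw count costs the same number of points, which is why a score of 90 can correspond to thousands of questions while 10-20 corresponds to a handful.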
Since a score is simply the average of the two measures described in the previous question, you can compare the individual measures against other frameworks to see why yours places where it does. For GitHub, for instance, you can look at how many stars your framework has and compare that with frameworks that score higher. If you still think something is wrong with the score after doing some investigation, please let us know.
I'd love to hear about it. Just send a suggestion.