Monday, February 07, 2011

How the Fraser Institute Gets it Wrong with School Rankings

This was a good line from Adrian Dix:
"Fantasy hockey pool guidebooks use more rigorous assessment and criteria than the Fraser Institute," Dix declared in a statement.

He's referring to school rankings supplied by the libertarian, right-leaning Fraser Institute, a think tank favouring private solutions and free markets. But it made me curious what analysis the Institute actually does to reach its widely cited rankings. So I took a look.

The rankings are most often criticized because they are based (partly) on the Foundation Skills Assessment (FSA), a standardized test widely derided by educators. Here's Jane Friesen of SFU:
"It’s simply telling you how is a particular cohort of students in a school doing in a particular year. I think we have to be careful to not interpret those results as a measure of the effectiveness of the school and I think that’s where the real issue comes in,” she said.

Friesen said there are a number of factors that the rankings don't take into account, such as students' backgrounds.

In other words, the test was never intended to be used to directly compare schools, even though that's precisely how the Fraser Institute uses it. But then I dug into the actual Fraser Institute 2010 report card for B.C. and the Yukon and found that the problems go well beyond the use of the FSA.

The Fraser Institute rankings actually depend on seven factors, only three of which involve the FSA. The other four are the Math 10 gender gap, the English 10 gender gap, the graduation rate, and the delayed advancement rate. Combining these factors with various weights (graduation rate and delayed advancement rate accounting for fully 25% of the final score), the Fraser Institute scores schools on a 0-10 scale and ranks them from "best" to "worst."
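
To make the arithmetic concrete, here is a rough sketch (in Python) of what a weighted composite like this looks like. The factor scores and all of the weights below are my own illustrative guesses, not the Institute's actual formula; the only constraint taken from the report card is that graduation rate and delayed advancement rate account for 25% of the final score between them.

    # A rough, hypothetical sketch of a weighted 0-10 composite score.
    # All numbers and weights are made up for illustration, except that
    # graduation rate and delayed advancement rate together carry 25%.

    # Hypothetical per-factor scores for one school, each already on a 0-10 scale
    factors = {
        "fsa_based_1": 7.2,               # the three FSA-based factors (not named above)
        "fsa_based_2": 6.8,
        "fsa_based_3": 9.0,
        "math10_gender_gap": 8.5,
        "english10_gender_gap": 8.0,
        "graduation_rate": 9.4,
        "delayed_advancement_rate": 8.8,
    }

    # Illustrative weights (they sum to 1.0); only the combined 25% for the last
    # two factors comes from the report -- everything else is assumed.
    weights = {
        "fsa_based_1": 0.25,
        "fsa_based_2": 0.15,
        "fsa_based_3": 0.10,
        "math10_gender_gap": 0.125,
        "english10_gender_gap": 0.125,
        "graduation_rate": 0.125,
        "delayed_advancement_rate": 0.125,
    }

    overall = sum(factors[name] * weights[name] for name in factors)
    print(f"Overall rating: {overall:.1f} / 10")  # schools are then sorted by this number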

You may have spotted the problem already. Given the factors I just mentioned, which schools would you expect to do best and which worst? Unsurprisingly, the rankings favour private schools that cater to the wealthy and well-to-do. Social context is completely ignored: schools are compared directly with one another even though some face far greater challenges, with students coming from poverty and neglect and contending with myriad obstacles rarely seen by private school attendees. Should we conclude that West Point Grey Academy in Vancouver is really a better school than a public school in Prince George because it has higher graduation rates and better test scores? That these outcomes reflect superior teachers and a better curriculum? Of course not; the challenges faced are worlds apart. We shouldn't be surprised that these seven factors give a higher score to wealthy private schools. We should be surprised that the Fraser Institute uses those scores to rank schools as better or worse.

To put it another way, you shouldn't look at these rankings and think "wow, I should put my kids into West Point Grey Academy." You should look at them and think "wow, I should have a wealthy family with few social problems."

I would hope that people's skepticism would be aroused even before inspecting the Fraser Institute's dodgy use of statistics. After all, what are the chances that a right-leaning libertarian think-tank in B.C. and a right-leaning libertarian think-tank in Washington State would use the exact same ranking scheme, and that the scheme just happens to favour schools advocated by right-leaning libertarians?

Sadly, it gets worse. Many people would look at these stats and say that we should pay teachers according to how well their students do on such rankings. I think most people are sympathetic to the idea of rewarding good teachers, but punishing teachers who are faced with troubled students and real-world problems just makes the problem infinitely worse.

Finally, we should probably be skeptical of any ranking that gives a perfect score to the Bountiful Elementary-Secondary school.

7 comments:

Sixth Estate said...

The elementary school rankings are entirely FSA-based, which causes further problems. The Institute claims to incorporate socioeconomic background now, but it's mostly sloppy. The bottom line is that people with a background in marketing are a bad source for advanced statistical analysis (or anything claiming to be advanced statistical analysis, I should say).

I wrote my own critique of that brand-new report here:

http://sixthestate.net/?p=674

Gabriel said...

Thanks for the link. You're right, they do look at the socioeconomic factor but it's not part of the 0-10 score. It's reported as a separate "expected vs. actual" score. And since they admit that high income tends to correlate with high school ratings, it's strange that they would not normalize the actual score.

Sixth Estate said...

Even that socioeconomic figure is suspicious, though, as I say in my piece. They claim that it "only" accounts for 20% of a school's performance, although they don't explain how they arrived at that figure. And they don't really do socioeconomic analysis -- all they do is identify an average parental income, which is itself suspect for a number of reasons, since they're somehow tangling together census data and school performance data without explaining how. In fairness to them, given the limited data they seem to be working with, even if they had put it into the scoring I probably wouldn't have trusted them to do it fairly.

I automatically suspect sleight of hand when supposedly intelligent authors don't explain what they're doing (although to be fair, none of them are teachers and only one of them, the economist, might have a background in statistics). The fact that Bountiful came out on top and that all but one private school is supposedly outperforming by its socioeconomic metric makes me skeptical, to say the least.

Gabriel said...

The number is actually 30%, which I presume they are getting from the regression analysis. They admit in the report card that high income is associated with high school ratings.

I would like a lot more information about their regression analysis. They claimed to have looked at several "home characteristics" and found that only parental income had a significant association with school rating. What were the other characteristics? How much can you actually glean about students' backgrounds from publicly available data?

Gabriel said...

From what I gather, the predicted score is based on the regression analysis using only parental income as a predictor variable (excluding other factors). They seem to assume that all differences not attributable to parental income are attributable to good teaching, which is not a sound conclusion. There are numerous factors they ignore.
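
Roughly, I picture the expected-vs-actual comparison working something like the sketch below. This is my own illustration with made-up numbers, not their actual method:

    # My own rough sketch (made-up numbers) of predicting a school rating from
    # parental income and treating the leftover as "school performance".
    import numpy as np

    # Hypothetical data: average parental income ($000s) and overall rating (0-10)
    income = np.array([45.0, 60.0, 75.0, 90.0, 110.0, 140.0, 180.0])
    rating = np.array([6.1, 4.9, 7.3, 5.6, 8.0, 6.5, 9.0])

    # Least-squares fit: rating = slope * income + intercept
    slope, intercept = np.polyfit(income, rating, 1)
    expected = slope * income + intercept

    # Share of rating variance explained by income alone -- the analogue of the
    # "20%" / "30%" figures we've been discussing
    r_squared = np.corrcoef(income, rating)[0, 1] ** 2
    print(f"Variance explained by income: {r_squared:.0%}")

    # The residual (actual minus expected) is what gets read as the school's own
    # contribution, even though it also contains everything the regression leaves
    # out: special needs, ESL, selective admissions, and so on.
    for inc, act, exp in zip(income, rating, expected):
        print(f"income={inc:5.0f}k  actual={act:.1f}  expected={exp:.1f}  diff={act - exp:+.2f}")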

One example is special needs, which they actually try to incorporate. The problem is that, from my initial read-through, the vast majority of private schools are "n/a" for special needs while public schools are between 8% and 12% special needs. I am betting that the private schools have much lower rates of special needs students. This is just one example of a factor that has a huge impact and is not being sufficiently accounted for in the available data and analysis.

Sixth Estate said...

p. 10 of the 2011 report says that 20% of the variance is due to parental income.

In terms of what information can be gleaned, that was my question too. I'm honestly not sure because, again, they don't tell us. I frankly doubt the schools make available enrollment lists based on census dissemination areas, so there's at least one stage of combination going on within the Institute that they don't explain.

This, and the mysterious regression analysis, is what I mean by sleight of hand. They could easily have printed up their analysis instead of just sharing a few vague conclusions with us. When they don't do that, I get suspicious about what they DID do.

Special needs (and ESL, and French immersion) are "incorporated" in the sense that the numbers are given. They're not incorporated in the sense that they're tied to the data in any way, as parental income is, despite the fact that you'd expect both special needs and ESL to have a statistically significant impact on test results. And the special needs category still doesn't capture the full number of at-risk children, which I imagine is far higher in the public system.

The only explanation I can think of for the many "n/a" entries is that private schools don't have to follow the same reporting requirements for special needs students. My guess, like yours, is that their rates are much lower.

Gabriel said...

Hmm, so the 20% and the 30% reflect a discrepancy between the 2010 and 2011 reports, then. That's interesting. I got 30% from the June 2010 report.

One easy experiment would be to look at the correlation between special needs percentages and overall ranking for schools that do have special needs data available (i.e., the public schools). If there is a significant relationship, that would be interesting. Of course, the percentages of special needs students within just the public schools might fall in a relatively narrow range compared with private schools (we don't know, since it's not reported for private schools).
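
Concretely, I'm imagining something like this quick check (made-up numbers, just to illustrate):

    # Sketch of the proposed experiment: correlate special-needs percentage with
    # overall rating across public schools. All numbers below are made up.
    import numpy as np

    special_needs_pct = np.array([8.0, 8.5, 9.2, 10.1, 10.8, 11.5, 12.0])  # public schools only
    overall_rating = np.array([7.1, 6.8, 6.5, 6.0, 5.9, 5.4, 5.2])         # Fraser 0-10 rating

    r = np.corrcoef(special_needs_pct, overall_rating)[0, 1]
    print(f"Pearson correlation: {r:.2f}")

    # Caveat from above: within public schools the range is only about 8-12%, so even
    # a real underlying effect could show up as a weak correlation in this restricted range.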