Monday, March 19, 2012

Facts Are Not Always What They Seem

You know the old saying: "figures don't lie, but liars figure." Well, here is an interesting post on why some comparisons of figures are not what they seem. From Thoughts on Education Policy:

Thoughts on "Educational Productivity"

Last week, Matthew Ladner produced a stunning chart showing an "implosion" in our nation's educational productivity. Productivity here seems to be defined as the ratio of per-pupil expenditures on public education to average NAEP test scores. The former has tripled since 1970 while the latter has essentially remained flat for the upper grades. I'm not sure of the impetus behind that particular post, but the Bush Center has written a similar post that makes all the same mistakes.

Before I delve into those mistakes, I'll point out that they've also created a nifty website that allows people to compare students' standardized test scores in any district to the scores of other students in the state, nation, and world.*

So, what's wrong with comparing spending to achievement?  Seems straightforward.  And the graph is certainly compelling.  But, alas, the statistics that seem the most straightforward are often the least useful.  Among other issues:

1.) Spending and test scores are on different scales. Spending can multiply almost infinitely, while test scores have a ceiling. In the chart on the site, the average 17-year-old scored 306 out of 500 on the NAEP math test in 2008, which means that even if every kid in the country earned a perfect score the next time around, the average score would only increase by about 63%. Since school spending has tripled, the ratio of spending to achievement would still be far greater now than it was 40 years ago (a quick sketch of this arithmetic follows the list).

2.) Why would we assume that it takes the same level of effort for a school to get a student to earn a certain score now as it did in 1970? A zillion factors other than education spending influence achievement levels. If parenting ability, economic circumstances, living conditions, and such improved dramatically, then we shouldn't think it's miraculous if scores increase with no additional school spending. Similarly, if societal conditions worsened in some way, then more effort would be needed to achieve the same scores. I have no idea whether it's harder or easier to get the average 17-year-old to score 306 on the NAEP now than it was in 1970, but we'd need to know that answer to accurately measure educational productivity.

3.) Why would we assume that the same level of spending is commensurate with the same level of effort on the part of districts now as it was in 1970? The economic and social context of schooling is dramatically different. Perhaps most importantly, the number of women in the workforce -- particularly in fields outside of education -- has exploded, so schools no longer draw on a captive pool of talented labor. Simple economics dictates that it must cost more to buy the services of an equally qualified teacher.

4.) Test scores were not the sole goal of that increased spending. Surely, we also aimed to increase the number of high school and college graduates (I don't have the HS stats handy, but almost twice as many 25- to 29-year-olds have bachelor's degrees now as did in 1970, though the share has increased by only about a third since 1975). I think a reasonable argument could be made that it's increasingly costly to get each additional student to graduate (i.e., moving the HS graduation rate from 50% to 60% is easier than moving it from 80% to 90%), so we might not expect the same returns per dollar on those measures. And, surely, we also aimed to improve many other skills (e.g., critical thinking, physical/emotional health, social skills, art appreciation, etc.) that aren't measured by the math and reading tests listed.
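To make the ceiling-effect arithmetic in point 1 concrete, here's a minimal sketch in Python. It assumes, as the post suggests, that average scores were essentially flat (so the 1970 average is taken as roughly 306) and it normalizes 1970 per-pupil spending to 1; those baseline figures are illustrative assumptions, not data from the chart.

```python
# Ceiling-effect sketch for point 1 above. Assumptions (not real data):
# the 1970 average score is taken as ~306 (the post says scores have been
# essentially flat), and 1970 per-pupil spending is normalized to 1.

SCORE_MAX = 500     # NAEP scale ceiling
SCORE_1970 = 306    # assumed 1970 average (roughly flat since)
SPEND_1970 = 1.0    # normalized 1970 per-pupil spending
SPEND_NOW = 3.0     # spending has roughly tripled since 1970

# Even if every student earned a perfect 500 next time around,
# the average score could rise by at most ~63%.
max_score_gain = SCORE_MAX / SCORE_1970 - 1
print(f"maximum possible score gain: {max_score_gain:.0%}")  # -> 63%

# Meanwhile spending tripled, so the spending-to-score ratio stays
# well above its 1970 level even in that best-case scenario.
ratio_1970 = SPEND_1970 / SCORE_1970
ratio_best = SPEND_NOW / SCORE_MAX
print(f"spending per point vs. 1970, best case: "
      f"{ratio_best / ratio_1970:.2f}x")  # -> 1.84x
```

However you slice it, a spending-to-score ratio can only grow when the numerator is unbounded and the denominator is capped, which is the core of the objection.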

So, should Ladner's alarming chart worry us? I wouldn't dismiss it out of hand. I wouldn't be the least bit shocked if our returns to effort and spending have decreased over the past 40 years. But we can't tell whether productivity has decreased, remained steady, or increased by looking at that chart. The most compelling figures he presents are the large increases in non-teacher staff in schools, but some unknown number of those support staff are certainly invaluable, so even that doesn't prove all that much.

And, by the way, those same four problems apply to any international comparison of a simple spending-to-test-score ratio. Were we to eliminate schools completely, culture, society, and a myriad of other contextual factors would still produce kids who score much higher or lower on tests in different countries; changing those scores would be harder or easier, and cheaper or more expensive, in different countries; and each country emphasizes different outcomes to different degrees.

I'll be the last person to argue that our nation's schools are just fine -- we face countless problems with a nearly infinite number of solutions -- so please don't interpret my criticism as an argument for the status quo. I hate the status quo.  But also realize that Ladner's chart gives us exactly zero information about what ails our schools.



*The "world" here is 25 developed nations.  On another note, here's a fun game: they don't seem to have compiled the list of the top-performing districts, so go see which ones you think might rank highest.  So far, I've found:

-The average student in Chappaqua, NY outscores 89% of students internationally in reading and 82% in math.
-The average student in Chatham, NJ outscores 88% of students internationally in reading.
-The average student in Brookline, MA outscores 77% of students internationally in math.
