THE RESEARCH TRIANGLE. SORT OF.


As a student of education policy, I have one foremost personal goal: to keep my teacher sensibility and credibility from permanently draining away. Reading research reports, with their inevitable earnest calls for still more research and their authoritative policy claims and suggestions, one understands that the base of the research-policy-practice triangle is practice, with the apex being the conjunction of research and policy. Practice is a kind of collection pan (or audition stage) for ideas generated by scholarly investigation and clever policy-making.

Teachers serve as both subjects and objects of reform, but—with the exception of classroom-based action research, a phrase that causes your average doctoral student’s lip to curl—seldom go out looking to gather data or propose policy on a wider scale. They wait for the next finding, prescription or mandate to come down the pike, then either wrap the new guidelines around their old practice or attempt to ignore scholarship and policy altogether.

Some see this as evidence of a regrettable autonomy still present—despite the best efforts of education publishers—in the act of teaching. Others (people who work in schools, mainly) see these habits as a defense mechanism. No matter how many studies are conducted, no matter how large the data sets and innovative the statistical modeling, no matter how muscular the policy lever—kids keep coming to school, and teachers have no choice but to keep their heads down and teach them, somehow.

There’s been a little dustup over another one of Jay Greene’s papers, just released by the Manhattan Institute: Building on the Basics: The Impact of High-Stakes Testing on Student Proficiency in Low-Stakes Subjects. Greene’s shabby scholarship was roundly criticized about six months ago, when he and colleague Catherine Shock counted course titles in university catalogues and developed a math-to-multiculturalism ratio, proving that Ed schools and teachers didn’t give a rip about mathematics achievement. Greene took some heat for that, eventually re-characterizing the data as “an amusement.”

This time, Greene and co-authors Marcus Winters and Julie Trivitt investigate whether narrowing the curriculum to put greater emphasis on two tested subjects (math and reading) in Florida schools might have a negative impact on student achievement in other subjects. Their answer? No.

Eduwonkette and other bloggers have raised lots of questions about technical aspects of the study’s research design, and about the fact that the report was embargoed, not subjected to traditional peer review, before hitting cyberspace; these are the kinds of inquiries that won’t be satisfied until long after the titular core message (“building on the basics” is a good and justifiable thing) has become conventional wisdom. Sherman Dorn cranked out an engaging essay on “the reworking of intellectual credibility in the internet age (which) will involve judgments of status as well as intellectual merit.” And an “Anonymous Peer Review” poster on the Ed Week blog laid out, point by point, the technical issues of concern in the piece, including some that are obvious even to research lightweights. For example, the study uses two years of math and reading achievement data and only one year of test scores in science. That’s right—one year. I’m no statistician, but isn’t it hard to do growth comparisons with only one set of numbers?
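
For readers who, like me, aren’t statisticians, here is a back-of-the-envelope sketch of that objection. The scores below are invented purely for illustration; they have nothing to do with the actual Florida data:

```python
# A toy illustration (made-up scores, not the Florida data): measuring growth
# requires at least two measurements of the same students or schools.

math_2005 = [310, 295, 322]     # hypothetical scale scores, year 1
math_2006 = [325, 301, 340]     # hypothetical scale scores, year 2
science_2006 = [288, 270, 305]  # only one year of science scores exists

# With two years of math data, a gain score is simple subtraction:
math_gains = [after - before for before, after in zip(math_2005, math_2006)]
print(math_gains)  # [15, 6, 18]

# With one year of science data there is no earlier score to subtract, so any
# claim about science "gains" has to be inferred from a model rather than
# observed directly.
```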

Some tidbits from the report:

We find that students attending schools designated as failing in the prior year made greater gains on the state’s science exam than they would have done if their school had not received the F sanction.

There are two important reasons that we might expect schools deemed to be failing to respond positively. Those that have received an F grade for the first time may be shamed into improving their performance. Those that have received at least one failing grade may decide to raise their performance because they fear attrition of their student body.

Though there is some disagreement about which aspect of the accountability policy was effective (the threat of vouchers or the shame of an F grade), each of these analyses found that the policy improved the math and reading proficiency of students in public schools designated as failing.

While the hard-core researchers duke it out, let me step aside here and think like a teacher. Greene and his colleagues, through the Manhattan Institute (“turning intellect into influence”), have released a study strongly suggesting several things, some of which will be appealing to Florida legislators:

Don’t worry too much about schools cutting back on science or other academic subjects to meet math and literacy targets, because it doesn’t really matter, in the long run. Science scores are likely to go up, statistically, if math and reading scores go up—and that’s good enough for us.

Schools can “decide” to raise their performance after being shamed and threatened.

The policy of giving schools failing grades improves their reading and math proficiency. And now we have evidence that it improves all subjects, whether we spend time teaching them or not. Failure, therefore, is a great motivator.

Sanctions work, and are much less expensive than investing in improved instruction, engaging curriculum or retaining effective teachers.

Florida is a state with nearly 11,000 National Board Certified Teachers. The National Council on Teacher Quality, commenting acerbically on the recent—generally positive—National Research Council report on National Board Certification, said this:

Teachers from advantaged schools and states with financial incentives were more likely to participate in the certification process. Board-certified teachers are more likely to remain in the field than other teachers, and are more likely to move to assignments in high-performing schools with lower rates of poverty.

When policy and research “decide” that perhaps a robust science program really isn’t a necessity in a failing school, where will the Board-certified teachers migrate? Where might an accomplished science teacher seek refuge from the trickle-down effects of policy and research, along the bottom of the triangle?