Education's Journalism Problem

Scantron!

Breaking Journalism

The American Journalism Review has just published a searing condemnation by Washington Post contributor Paul Farhi of the state of education journalism, much of which, it contends, reinforces a narrative that the U.S. school system is failing -- a narrative supported by "self-styled education reformers," but refuted by the experiences of many parents asked to rate their local schools. I've railed against this before in the context of tech blogs' treatment of education, and while the AJR piece doesn't address technology specifically, I would argue that the narratives of failing schools certainly fuel much of the growing business of ed-tech.

Farhi contends that journalists simply aren't doing the legwork necessary to write good, critical stories. Instead, he argues, they're parroting the "ed reform" movement's version of the story -- not questioning the press releases, policies, or narratives they're handed by the likes of politicians, philanthropic organizations, and corporations. Part of my criticism of tech blogging certainly involves a similar issue: uncritical parroting of "buzz," churnalism, copy-and-pasting of press releases, and "parachute journalism."

Farhi says there's a lack of "due diligence" on the part of reporters, who have a starry-eyed fascination with Bill Gates alongside an inability to walk into the classroom or talk to many educators (thanks to both policy proscriptions and schools' unwillingness to communicate with the press). But I think there are other issues at stake here too, not least of which is the fact that journalism is a rapidly changing industry, one where "Old Media" is feeling increasingly squeezed and where -- in the brave new online world -- pageviews drive the product and often the storyline. There's an incredible amount of "diligence" that goes into chasing the latter, and so I'm never surprised to hear fear and failure touted. Well, that and the miracle, magical cure of technology…

Cheating Journalism

Last weekend, the Atlanta Journal-Constitution published a story in which it claimed to have found "suspicious test scores" on standardized tests throughout the country -- where, for example, "test results for entire grades of students jumped two, three or more times the amount expected in one year."

While the third paragraph of the story states that "the analysis doesn't prove cheating," that's certainly the spin that the newspaper put on its findings -- the headline invokes cheating, natch -- claiming that the test scores it examined followed the patterns of those found in the Atlanta School District which has suffered from its own massive cheating scandal, the largest in the history of the K-12 public education system.

The AJC, which first broke that scandal, has now turned its attention to the rest of the country. It examined scores from some 69,000 public schools and found "high concentrations of suspect math or reading scores in school systems from coast to coast." According to the paper, test scores were flagged as suspicious in part when they varied year over year: "In nine states, scores careened so unpredictably that the odds of such dramatic shifts occurring without an intervention such as tampering were worse than one in 10 billion."
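To make the flagging logic concrete: the sketch below is an illustrative simplification, not the AJC's actual statistical model (the paper used a regression-based approach, and all the numbers here are hypothetical). It shows the general idea of treating a school's year-over-year score change as "suspicious" when it is an extreme outlier relative to the distribution of changes across all schools.

```python
# Illustrative sketch only -- NOT the AJC's actual methodology.
# Flag a school's year-over-year score change as "suspicious" when it
# falls many standard deviations outside the distribution of changes
# seen across comparable schools.
from statistics import mean, stdev

def flag_suspicious(all_changes, school_change, threshold=3.0):
    """Return True if school_change is more than `threshold` standard
    deviations from the mean of all observed year-over-year changes."""
    mu = mean(all_changes)
    sigma = stdev(all_changes)
    if sigma == 0:
        return False
    z = (school_change - mu) / sigma
    return abs(z) > threshold

# Hypothetical data: most schools shift a few points between years.
statewide_changes = [1.2, -0.8, 2.5, 0.3, -1.9, 1.1, 0.6, -2.2, 1.8, 0.4]

print(flag_suspicious(statewide_changes, 30.0))  # an extreme jump
print(flag_suspicious(statewide_changes, 1.5))   # an ordinary shift
```

The catch, as the critics discussed below point out, is that an extreme statistical outlier is only evidence of an *anomaly*; deciding it is evidence of *cheating* requires additional data the flagging step doesn't have.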

The story then proceeds to talk about why we might be seeing such widespread cheating (not why we're seeing anomalies, I should note, but why we're seeing cheating): namely, there's "way too much pressure." That is, we live in an environment of high-stakes testing that rates schools and now individual teachers on students' performance on standardized tests. It's Campbell's Law writ large:

"The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor."

Not surprisingly, perhaps, there's been some pushback on the AJC's story, much of it from local officials whose schools were flagged as "suspicious." Cleveland Schools CEO Eric Gordon quipped, "In all candor, if I thought we had a widespread cheating problem in the district, I would expect our achievement to look quite a bit better than it does." But many of the implicated districts have responded far more gravely, noting that while they take accusations of cheating seriously, they are also concerned about flaws in the methodology of the AJC story.

And it's the question of methodology that seems to go to the heart of this story. Gary Miron, an education professor at Western Michigan University, penned the highest-profile rebuttal to this notion of a nationwide cheating scandal. His op-ed, which appeared in the Washington Post, challenged both the methodology and the conclusions of the AJC story. Miron was one of the academics who consulted on USA Today's investigation last year into testing irregularities (a story that led in turn to a federal Department of Education investigation into testing practices in DC schools under former chancellor Michelle Rhee). He said he was shown some of the assessment data that the AJC was using in its research: "My review, however, yielded serious concerns about the data used, the methods of analysis employed, and the conclusions drawn."

Among Miron's concerns: the AJC investigation did not look at individual students' scores, just at schools' data. As such, it did not account for student mobility and could not ensure it was comparing the same students when looking at cohorts' performance year-over-year. The AJC investigation also did not look at erasure data, one of the key ways in which anomalies in scores can be linked to cheating. And the AJC study did not account for the demographic characteristics of the schools -- this matters for schools with large ESL populations, for example, where students can make "uncharacteristically" large gains in test scores as their English improves.
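The mobility point is worth making concrete. The toy example below (entirely hypothetical names and scores) shows how a school-level cohort comparison can report a large "gain" even though no individual student's score improved -- the student body simply changed between test years:

```python
# Hypothetical illustration of the student-mobility critique: comparing
# school-level averages across years conflates real learning gains with
# changes in who was enrolled.

# Year 1: a grade's roster with (made-up) test scores.
year1 = {"Ana": 60, "Ben": 62, "Cal": 58, "Dee": 61}

# Year 2: the "same" cohort a year later -- but two lower scorers moved
# away and two higher-scoring transfers arrived. No returning student's
# score changed at all.
year2 = {"Ana": 60, "Ben": 62, "Eve": 85, "Fay": 88}

avg1 = sum(year1.values()) / len(year1)
avg2 = sum(year2.values()) / len(year2)
print(round(avg2 - avg1, 2))  # school-level "gain" driven by turnover

# A student-level comparison, which requires individual records the AJC
# did not have, tells a different story:
returning = sorted(set(year1) & set(year2))
gains = [year2[s] - year1[s] for s in returning]
print(gains)  # every returning student gained exactly zero points
```

The same mechanism runs in reverse, of course: a school losing high scorers to mobility can look like a suspicious collapse without anyone tampering with anything.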

In the week since the initial AJC story, the newspaper has (mostly) stood by its investigation, dismissing Miron's and others' questions. It has, however, following a very thorough response from the Nashville Metro Schools about the methodology, removed at least one Nashville school from its list of suspects and from its story.

Data Journalism

Much like the recent Teacher Data Reports issued by New York City and chronicled by its local newspapers, the AJC's investigation into standardized testing marks an interesting moment for the nascent sub-field of data journalism, one that purports to take interesting datasets, glean insights and tell powerful stories. And certainly applying the tools of data science to school-related data could uncover some very interesting stories -- stories we may or may not expect.

Despite its gesture towards digging into the data, the AJC falls fairly far short of what I'd hope to see from the practices of good data journalism. The overwhelmingly negative response to the story from researchers and statisticians is certainly a warning sign that something's "off" with what the paper has done to make sense of nationwide testing data.

Part of what constitutes "good data journalism" certainly involves a research methodology that stands up to scrutiny, and to that end, it's imperative that journalists be open about their methods and their data. In all fairness to the paper, its online version does offer a sidebar where you can find more information about the process. This includes a blurb about the research methodology (although that's not necessarily helpful if you have little idea why a linear regression model is or isn't a good approach). The AJC doesn't crack open the datasets for onlookers either, although it does provide a way for readers to look up schools to see if they're on "the list." The latter is particularly reminiscent of stories in The New York Times and LA Times that let readers look up the names and rankings of teachers. It's this sort of thing that makes data punitive rather than productive when it comes to discussions about education in the U.S.

That feels like a very different approach to data journalism than the one taken by The Guardian, arguably the most forward-thinking data journalism outlet working today. For one thing, The Guardian frequently publishes the datasets that inform its stories, giving other data scientists the chance to "run the numbers," if you will, and see what else they can come up with.

Sure, there are privacy concerns when it comes to doing this with educational data, but the flip side of keeping this data behind closed doors -- accessible and interpretable only by certain experts, politicians, and journalists -- is that we lose what should be a public resource.

Failing Journalism

No doubt the AJC did uncover widespread "irregularities" in its research of test scores nationwide. But it jumped to the conclusion that these irregularities mean widespread cheating. Could one not just as easily read these irregularities as an indication that standardized tests are inconclusive and unreliable ways to measure student progress (let alone, as is the case in more and more districts, teacher performance)? Is the problem perhaps not that schools are cheating (or, as the AJC put it, "cheating our children"), but the testing regime itself? Or are we so caught up in one particular story -- a story of failure and cheating -- that we can't see clearly?

And we've come full circle here to the American Journalism Review's claims about education journalism: the AJC story fits the narrative that the national media loves to highlight, even when local districts and local experiences are quick to suggest otherwise. It's eye-opening to read the blow-by-blow timeline of how the AJC story was researched, questioned by scholars, and still marketed to other news organizations (here's the press release the AJC and its PR agency issued on the eve of the story going to press). This is a story that feels crafted to fuel the fire of education alarmism. There should be no surprise, then, that the AJC editor is so proud of the team that worked on it, because by all the measurements that count online -- pageviews and hype -- it was a wildly successful story.

Photo credits: MforMarcus
