Politics and Facts in PolitiFact Ratings: A Reply to Vanity Fair

Press Release
June 7, 2013
Contact: Katy Davis

On May 28, 2013, CMPA released findings from a study of PolitiFact's ratings of political claims made by Republican and Democratic sources. On May 29 the study was criticized in an article by Vanity Fair (VF) writer Kurt Eichenwald.

The VF critique is a very useful contribution to the debate over fact-checking in political journalism. Its misunderstanding of the study's goals, methods, and findings nicely illustrates some of the pitfalls journalists encounter in dealing with empirical research in general and media research in particular. So we decided to address its criticisms here in some detail.

Vanity Fair headlined its article, “The Flawed, Statistically Silly New Study That Calls the Republican Party More Dishonest.” The article’s lead succinctly states the study’s putative takeaway: “A new study out today proclaims that the Republican Party is much more dishonest than the Democratic Party.” This is followed by criticisms of the CMPA study for “using phrases like, ‘Republicans lie more,’ and ‘Republicans are less trustworthy than Democrats.’” The body of the article further argues that the study’s conclusions aren’t valid, because its judgment of “truthfulness” is based on a small sample of the subjective opinions of PolitiFact.

But is this what the study actually purported to find? To determine whether VF's account accurately depicts the study, we begin by comparing CMPA's actual statements of its findings with Vanity Fair's versions: not 'Republicans lie more' (VF), but "Media Fact-Checker Says Republicans Lie More" (CMPA); not 'Republicans are less trustworthy than Democrats' (VF), but "A leading media fact-checking organization rates Republicans as less trustworthy than Democrats" (CMPA) [emphasis added].

In other words, CMPA’s study did not try to determine whether one party lies more than the other, based on PolitiFact ratings. It addressed the much narrower question of whether PolitiFact criticized one party more than the other, by aggregating and comparing PolitiFact’s own ratings of claims made by each party’s representatives and supporters (and leaving aside the question of whether their claims or PolitiFact’s ratings are accurate).

This renders irrelevant most of VF’s subsequent criticisms, which are aimed at establishing that PolitiFact ratings are not objective and comprehensive enough to establish the truth of which party lies the most — for example, “the group’s determinations are not objectively accepted fact.” (This is a point on which both CMPA and PolitiFact director Bill Adair – in a published response to the study – entirely agree). However, VF concludes that if this study can’t tell us whether one party lies more than the other, then it’s of little value.

Along the way, VF also raises a statistical issue that illustrates a common misunderstanding of the concept "statistical significance." In questioning the value of CMPA's analysis of 100 PolitiFact ratings over a four-month period, VF states, "The basis for such a selection is supposed to be what is called 'statistical significance,' and there is nothing that would indicate that 100 statements – filtered through … PolitiFact's subjective judgments — constitutes [sic] anything of any statistical value."

In fact, “statistical significance” has nothing to do with the objectivity or substantive significance of a finding or the sheer number of cases studied. It refers to the probability that findings based on a sample of cases are representative of the larger group (called the universe) to which the sample belongs. For example, how well does an attitude survey of 1000 people represent the attitudes of the many millions of people from whom the sample is drawn?
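
To make the sampling arithmetic behind that example concrete, here is a minimal illustrative sketch in Python (the figures are hypothetical and are not drawn from the study) of the margin-of-error calculation that sample-based inference is meant to address:

    import math

    def margin_of_error(p, n, z=1.96):
        # Approximate margin of error for a sample proportion at ~95% confidence.
        # p: observed proportion in the sample (e.g., 0.52 means 52%)
        # n: sample size; z: z-score for the confidence level (1.96 ~ 95%)
        return z * math.sqrt(p * (1 - p) / n)

    # Hypothetical survey: 52% of 1,000 respondents report some attitude.
    p, n = 0.52, 1000
    print(f"Estimate: {p:.0%} +/- {margin_of_error(p, n):.1%}")  # about 52% +/- 3.1%

A survey of 1,000 people can estimate the attitudes of millions only within a margin of error of roughly three percentage points; that is the kind of inference from sample to universe that significance testing concerns.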

The concept of statistical significance is simply irrelevant to the CMPA study, because it is based not on a sample but on the entire universe of ratings during the specified time period. We included all 100 ratings of claims associated with either party during the first four months of Obama’s second term. VF’s misuse of this concept apparently harks back to its original misconception that CMPA’s data are supposed to be representative of the thousands of political assertions that are made every day.

More generally, VF questions the usefulness of findings based on a relatively small number of cases in a relatively brief period of time. As the release made clear, however, this is only the first report from an ongoing study of fact-checking that will track changes over time and add other fact-checking outlets. The release also linked to CMPA's previous study of PolitiFact ratings during the 2012 general election, which found almost identical differences in ratings of the two parties. An earlier University of Minnesota study of PolitiFact ratings throughout 2010 produced the same pattern, suggesting that these findings are in fact robust.

Now let us return to VF's assumption that there is little point in addressing the question of how journalists evaluate the truth of assertions by politicians unless we know whether the claims are in fact true. This opens onto a more general question: are patterns of media coverage worth identifying empirically, even if we can't verify their relation to reality?

In particular, fact-checking has become a highly visible and influential genre of political journalism, which emerged as the media's signature contribution to campaign discourse during the 2012 presidential election. So it's worth examining the picture of reality that leading expositors of this approach produce with their critiques of political claims and assertions.

PolitiFact is arguably the nation's leading media fact-checker – its work has been awarded a Pulitzer Prize, its findings are frequently cited in the media, and it has spawned other like-minded enterprises. So, for example, if regular followers of PolitiFact's ratings come to perceive Republicans as the party that lies more, they may become more skeptical of future claims by Republican politicians and of the party's veracity more generally.

Whether this outcome is desirable depends on your own political predispositions and views of reality. Either way, it is certainly consequential, and therefore well worthy of serious study. In short, PolitiFact’s ratings of political reality cannot help but create their own political reality. And understanding the mediated reality of politics is central to understanding politics itself.

The Center for Media and Public Affairs is a non-profit, non-partisan research organization affiliated with George Mason University. It has monitored news coverage of every presidential election and every new administration since 1988. For CMPA findings on the Obama administration, see: cmpa.com
