Why News Organizations Should Stop Sponsoring Polls
Doing so would simply produce fewer bad stories.
Welcome to Second Rough Draft, a newsletter about journalism in our time, how it (especially its business) is evolving, and the challenges it faces.
After what seemed like half a year of merciful inattention to public opinion polls, the political press is back to writing about them almost constantly. The occasion is that President Biden’s approval ratings have declined, likely in response to the avalanche of problematic press coverage of the messy Afghanistan withdrawal and the seeming endlessness of the pandemic. (A bit of context: Biden’s approval is still just ahead of the highest level Trump achieved at any point after the first week of his presidency.)
But this column is not about Biden’s poll problem—it’s about the news media’s poll problem. It also isn’t about how most of the polls have been wrong in recent elections, but about how a few of them are always wrong, and what that should mean for news organizations.
As I write this, FiveThirtyEight (my preferred source)[1] offers no fewer than 24 different polls of Biden approval ratings taken entirely during the first fifteen days of this month. Five of these are of likely voters, which means they include all sorts of implicit predictions on the part of the pollsters about who will vote in the 2022 or 2024 elections—in the wake of a terrible industry track record in making such predictions in both 2016 and 2020. The range of outcomes for these five polls is a relatively (suspiciously?[2]) tight four points for Biden’s approval rating. Six of the 24 polls are ostensibly of registered voters, which require some assumptions, but far less heroic ones; the range of these six polls is seven points, two points higher and one lower than the likely voter range.
Finally, 13 of the polls are of all adults. Because very good data exists about the demographic shape of this universe, these polls may be less predictive of elections, but they should actually be more accurate. Yet the range of these polls is 11 points, with two claiming that Biden’s performance is seen positively by a robust 50% of his constituents, while another puts it at an anemic 39%. If we eliminate the two highest and two lowest results, the range of the nine remaining polls is a more plausibly tight five points, 44% to 49%.
How do we get such a wide range of results in such a tight timeframe? You get a hint if you read the fine print of poll results, the print even finer than that which reports the margin of error. Somewhere down there you’ll see that the results are accurate within X percentage points either way (that’s the margin of error) “at a 95% confidence level.” So if Biden has a 46% approval rating with a margin of error of 3%, that means the real rating, if we asked everyone, is 95% likely to be between 43% and 49%, which is informative.[3] But it also means that there is a five percent chance it’s not. Put another, simpler, and much more important way, it is a mathematical fact that one in 20 such polls is just plain wrong.
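For readers who want to see the arithmetic, here is a minimal sketch of how that margin of error falls out of the math, assuming a simple random sample of about 1,000 respondents (a typical, but here assumed, national sample size):

```python
import math

# Margin of error for a sample proportion at 95% confidence:
# MOE = z * sqrt(p * (1 - p) / n), where z is about 1.96 for 95%.
def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    return z * math.sqrt(p * (1 - p) / n)

p = 0.46  # a reported 46% approval rating (illustrative)
n = 1000  # assumed sample size; not from any specific poll

moe = margin_of_error(p, n)
print(f"±{moe:.1%}")  # prints roughly ±3.1%, i.e., an interval of about 43% to 49%

# "95% confidence" means that if the same poll were fielded many times,
# about 1 in 20 of these intervals would miss the true value entirely.
```

Real polls add weighting and design effects that widen this figure somewhat, so treat this as the idealized textbook case, not any particular pollster’s actual method.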
Why does any of this matter? If news organizations only reported poll averages, as anyone who studies public opinion will tell you they should, it wouldn’t. But that’s not what they do.
Instead, news organizations frequently make two mistakes in reporting on polling. One of these could easily be prevented.
New trend or just an outlier?
The less preventable error is to seize on “dramatic” results from individual polls, which usually means polls that cut against the conventional wisdom of the moment. It’s true that such polls may be early signs of a shifting tide, but they can also be simply aberrant. I know from long experience, however, that the only antidote here is sober, skeptical reporters and editors. (The same phenomenon applies in reporting on “dramatic” new scientific studies, especially in the midst of crises like the pandemic.)
The preventable error is what I wanted to address this week. It comes when news organizations report on single polls they themselves sponsor. In the pre-digital world, this may have made sense, as it was otherwise hard to get timely access to the sometimes-interesting “internals” of polls, the cross-tabulations of questions with other questions, including demographic splits. But that’s now widely and quickly accessible with many polls.
“It may be wrong, but it’s ours”
Proprietary polls may also have once made sense from a branding perspective, but it’s hard to imagine they still do. Polling doesn’t feel like it adds much luster to anything nowadays. Perhaps there is still some ego boost for political reporters and editors who have early access to their “own” polls, but that hardly serves readers.
And the damage from a news organization sponsoring a poll should, at this point, be dawning on you: What happens when you make a big deal of one of the one-in-20 polls that the math tells you will be simply incorrect?
This risk isn’t just theoretical. Biden’s best recent poll came from CNN; that hardly covers the newsroom in glory, and while CNN did not lead its poll story with the Biden numbers (perhaps it was itself suspicious of them), it also never seems to have told viewers its poll was an outlier.
What is to be done? I think the time is past due for news organizations to stop sponsoring polls.[4] The result might be less coverage of polling altogether, which wouldn’t be a great loss. Beyond that, it would almost certainly push horserace coverage toward focusing on polling averages, where it belongs, and toward greater caution in trying to separate the harbingers from the “permanent exclusives.” That would be a win for readers, for voters, and for trust in news itself.
[1] Nate Silver sometimes drives me crazy these days when he plays armchair epidemiologist, but his political analysis, while not always correct, always strikes me as relatively unbiased, and his methodology as unerringly so. RealClearPolitics, unfortunately, is both less transparent and frequently seems to be trying to put a right thumb on the scale of its data.
[2] The suspicion would be “herding,” the tendency of some pollsters to adjust their assumptions so that their results don’t stray too far from everyone else’s.
[3] Not believing this is true for a reputable poll of adults means you simply don’t believe in math. That’s your prerogative, I guess, but if you find yourself in this bucket (“I don’t believe in polls”), you might try to get a bit less annoyed at the people who don’t believe in science.
[4] I am not proposing that this extend to exit polls. While also recently problematic, these can be enormously valuable, and might not be readily or at least promptly available absent news organization funding.