


Are the Trump polls wrong again? Why experts are worried about “herding”

Early voting models are interesting but potentially misleading.
Photo: Yasuyoshi Chiba/AFP/Getty Images

With less than a week to go, polls show an incredibly close race between Donald Trump and Kamala Harris, both nationally and in the seven battleground states. Lately, it’s been hard to find polls showing anything else. And this has led to suspicions that, as often happens in the home stretch, pollsters are “herding” — that is, bringing their numbers as close as possible to those of other pollsters — as Nate Silver put it:

Silver has focused on herding for some time. When he was still running FiveThirtyEight, he explained why some pollsters herd:

Another complication is “herding,” or the tendency for polls to produce results very similar to other polls, particularly near the end of a campaign. A methodologically inferior pollster can show superficially good results by manipulating its polls to match those of the stronger polling firms. Left to its own devices – without stronger polls to guide it – it might not fare so well. When we examined Senate polls from 2006 to 2013, we found that methodologically poor pollsters improve their accuracy by about 2 points when there are also strong polls in the field.

In other words, no one wants the embarrassment of releasing a final pre-election poll that turns out to be a total outlier. This may be why some of the worst outliers are actually produced by “stronger polling firms” that don’t need to worry about their reputation for accuracy, including the New York Times–Siena outfit that Silver credits with being honest with its data. Times–Siena had Joe Biden up nine points among likely voters in its last poll in mid-October 2020 (Biden won the national popular vote by 4.5 percent). Worse still, a Times–Siena poll four years ago at the end of October showed Biden leading by 11 points in Wisconsin, a critical state he actually won by 0.7 percent.

This recurring phenomenon raises a philosophical question: If the polls turn out to be misleading, is the bigger culprit the high-quality pollster that publishes an aberrant survey or the low-quality pollsters that “herd” in the same direction? It’s hard to say. An underlying question is how herding occurs, assuming pollsters don’t simply look around and force their numbers to mimic everyone else’s. Earlier this week, political scientist Josh Clinton explained how the decisions all pollsters must make about their samples can significantly alter their results without any overt manipulation:

Once survey data is collected, pollsters must evaluate whether they need to adjust, or “weight,” the data to address the very real possibility that the people who participated in the survey differ from those who did not. This involves answering four questions:

1. Do respondents match the electorate demographically in terms of gender, age, education, race, etc.? (This was a problem in 2016.)

2. Do the respondents match the electorate politically after the sample is adjusted for demographic factors? (This was the problem in 2020.)

3. Which respondents will vote?

4. Should the pollster trust the data?
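The weighting step behind question 1 can be sketched in a few lines. This is a toy illustration only — every number below is invented for the sketch and does not come from any real poll:

```python
# Toy illustration of demographic weighting (post-stratification).
# All figures are invented for this sketch, not data from any real poll.

# Invented sample of 1,000 respondents, collapsed into (education, candidate, count).
sample = [
    ("college", "Harris", 330), ("college", "Trump", 270),
    ("non_college", "Harris", 170), ("non_college", "Trump", 230),
]

def margin(rows, weights=None):
    """Harris-minus-Trump margin in points, optionally weighting each group."""
    tally = {"Harris": 0.0, "Trump": 0.0}
    for edu, cand, n in rows:
        tally[cand] += n * (weights[edu] if weights else 1.0)
    total = tally["Harris"] + tally["Trump"]
    return 100 * (tally["Harris"] - tally["Trump"]) / total

# Unweighted, this invented sample is a dead heat.
print(f"unweighted margin: {margin(sample):+.1f}")

# The sample is 60% college-educated, but suppose the electorate is only 40%.
# Post-stratification rescales each group to its assumed electorate share.
sample_share = {"college": 0.60, "non_college": 0.40}
target_share = {"college": 0.40, "non_college": 0.60}
weights = {edu: target_share[edu] / sample_share[edu] for edu in sample_share}

# The very same raw interviews now show Trump ahead by five points.
print(f"weighted margin:   {margin(sample, weights):+.1f}")
```

The same interviews yield a tied race or a five-point lead depending on one defensible-looking assumption about turnout composition — which is exactly the room for discretion Clinton describes.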

Clinton goes on to demonstrate that the ways pollsters answer these questions can produce as much as an eight-point variation in horse-race results. The fact that polls for the 2024 general election don’t actually show huge swings is probably the best evidence that they are answering these questions as a “herd,” even if they aren’t putting their thumbs on the scale in favor of Trump or Harris.
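The statistical logic behind the herding suspicion is that independent polls should disagree by a predictable amount from sampling error alone. A quick simulation — with invented parameters, purely to illustrate the principle — shows the spread one would expect if each poll of a truly tied race interviewed 800 voters independently:

```python
import random

# Invented parameters for illustration: a truly 50-50 race, polls of 800
# respondents each, drawn independently. Sampling error alone spreads the
# reported margins; final polls clustered much more tightly suggest herding.
random.seed(0)
N_POLLS, N_RESP, TRUE_P = 1000, 800, 0.50

margins = []
for _ in range(N_POLLS):
    harris = sum(random.random() < TRUE_P for _ in range(N_RESP))
    margins.append(100 * (2 * harris - N_RESP) / N_RESP)  # Harris minus Trump, in points

mean = sum(margins) / N_POLLS
sd = (sum((m - mean) ** 2 for m in margins) / N_POLLS) ** 0.5
print(f"std. dev. of simulated poll margins: {sd:.1f} points")
```

With these assumptions the margins spread by roughly 3.5 points (one standard deviation), so a field of final polls all landing within a point of each other is tighter than pure chance would produce.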

We can’t know whether the herd was right or wrong before the election, but in the pollsters’ defense, they have mostly worked hard to address the problems that produced big errors in state polls in 2016 and in national and state polls in 2020. (They were, after all, quite accurate in 2022.) Yet, as Clinton argues in a newer piece, the error could be the same this year and be shared more systematically:

The fact that so many swing-state polls show similar margins is a problem because it raises the question of whether those races are tied because of the voters or because of the pollsters. Will 2024 be as close as 2020 because our politics are stable, or do the 2024 polls resemble the 2020 results only because of decisions made by state pollsters? The fact that the polls appear closer than one would expect even in a perfect polling world raises serious questions about the second scenario.

Leaving the polls aside for a moment, anxious pundits and supporters of both presidential candidates are understandably looking for signs that a close election will break one way or the other at the last minute. Some in both camps are obsessed with the fool’s gold of early-voting data; given the enormous imponderables involved in determining who these voters are and whether their “banked” votes would otherwise have been cast later, you can use early voting to “prove” whatever you want. Others are obsessed with subjective signs of “enthusiasm,” which may matter, but only to the extent that it goes beyond certainty to vote and is contagious (an “unenthusiastic” vote counts exactly the same as an “enthusiastic” one). A more relevant factor is the scope and effectiveness of last-minute ads and voter-mobilization efforts, but the former tend to cancel each other out and the latter are generally too far below the surface to be weighed with any confidence.

Finally, some observers point to late trends in the objective condition of the country, particularly improving macroeconomic data. There are two problems with this approach: first, perceptions of the economy tend to be baked in well before Election Day, and, second, voters’ current perceptions of all sorts of phenomena have relatively little to do with objective evidence. Against all evidence, a large portion of the electorate believes that the economy is horrible and getting worse, that we are in the midst of a national crime wave, and that millions of undocumented immigrants are flooding into heartland communities to commit crimes and vote illegally. This is not an environment in which many voters anxiously consult statistics to see how America is doing.

If the polls turn out to be significantly wrong, we will almost certainly see a wave of post-election recriminations in which angry or frustrated people argue that we should reject all objective indicators of how an election is going and instead rely on vibes, “gut instincts,” and our own prejudices. I hope that doesn’t happen. Imperfect as they are, polls (and, for that matter, economic indicators and crime or immigration statistics) are far better than relying on cynical partisan hype, manipulation, and misinformation, all of which tend to be self-perpetuating when given credence. And as we already know – and may be reminded on November 5 – it is only a small step from rejecting the polls to rejecting the actual election results. And then there will be another January 6, or something worse.