Andrew Gelman's U.S. Election Model Is Tracking Presidential Candidates Kamala Harris's and Donald Trump's Chances of Winning

In the lead-up to this year’s presidential election, Andrew Gelman, a professor of political science and statistics, collaborated with Ben Goodrich, an instructor in the political science department, to develop The Economist’s election tracker, which aims to predict the outcome of the U.S. presidential election state by state and nationally.

As of the day of publication, the tracker puts Vice President Kamala Harris’s share of the intended vote at 50.1% and former President Donald Trump’s at 45.6%. Harris is estimated to have roughly a 3-in-5 chance of winning.

Columbia News spoke with Gelman about the election tracker and some of the many other projects he has in the works this fall.

How does your election tracker work?

We use a programming language called Stan that we developed here at Columbia, which allowed us to program a model that combines information from economic and political “fundamentals” along with state and national polls. We re-fit the model when new polls come in.
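To make the “fundamentals plus polls” idea concrete, here is a deliberately simplified sketch, not the actual Economist model and with made-up numbers, of how a fundamentals-based prior for a candidate’s vote share can be combined with a poll average using a conjugate normal-normal Bayesian update:

```python
# Toy sketch (not The Economist's actual model; all numbers are made up):
# combine a "fundamentals" prior for a candidate's two-party vote share
# with a state poll average via a conjugate normal-normal Bayesian update.

def combine(prior_mean, prior_sd, poll_mean, poll_sd):
    """Posterior mean and sd when both prior and likelihood are normal."""
    prior_prec = 1.0 / prior_sd ** 2
    poll_prec = 1.0 / poll_sd ** 2
    post_var = 1.0 / (prior_prec + poll_prec)
    post_mean = post_var * (prior_prec * prior_mean + poll_prec * poll_mean)
    return post_mean, post_var ** 0.5

# Fundamentals (economy, incumbency) suggest 51% +/- 3 points; a polling
# average says 50% +/- 1.5 points. Re-running this when new polls arrive
# mirrors, in spirit, the "re-fit the model when new polls come in" step.
mean, sd = combine(prior_mean=0.51, prior_sd=0.03,
                   poll_mean=0.50, poll_sd=0.015)
print(f"posterior vote share: {mean:.3f} +/- {sd:.3f}")
```

The real model does this kind of updating jointly across states and over time, which is where the multilevel structure discussed below comes in.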

How does Stan, the programming language you and your colleagues developed, work?

Stan was a collaboration between many people. Its sampling algorithm was written by computer scientist Matt Hoffman, who was working as a postdoc with me at the time, and the program was designed and written mostly by Bob Carpenter and Daniel Lee. It uses Bayesian inference, a statistical approach developed by Pierre-Simon Laplace in the late 1700s. Laplace was a mathematician and physicist but also interested in the social science of his day. There have been many developments in Bayesian statistics since then, in particular with multilevel modeling—an approach to incorporating different sources of variation in a single analysis, in this case including variation across states, changes in opinion over time, and national and state polling errors—and with computing. In the past, data analysts could either choose among a small menu of codified models or be required to engage in a big computational and research effort if they wanted to reliably fit a new model. With a probabilistic programming language such as Stan, you can program up your own models and fit them right away. It requires a bit of validation and debugging for sure, but the whole process has really changed: The steps of modeling have become much more active. We were always able to do exploratory data analysis, but now we can do exploratory modeling.
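As a rough illustration of the partial pooling that multilevel modeling provides, here is a toy calculation, not the Stan model itself and with assumed numbers, in which each state’s raw poll average is shrunk toward a national mean, with noisier states pulled in more strongly:

```python
# Toy partial-pooling calculation (a stand-in for the multilevel structure
# described above, not the actual Stan model; all numbers are assumed).
# Each state's raw poll average is shrunk toward the national mean, with
# less shrinkage for states that have more polls.

national_mean = 0.50      # hypothetical national two-party vote share
between_state_sd = 0.04   # assumed spread of true state-level means
poll_sd = 0.02            # assumed sampling noise of a single poll

# Hypothetical (raw average, number of polls) for three states.
states = {"A": (0.53, 2), "B": (0.47, 10), "C": (0.58, 1)}

for name, (avg, n) in states.items():
    data_var = poll_sd ** 2 / n   # variance of the state's poll average
    weight = between_state_sd ** 2 / (between_state_sd ** 2 + data_var)
    pooled = weight * avg + (1 - weight) * national_mean
    print(f"state {name}: raw {avg:.3f} -> partially pooled {pooled:.3f}")
```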

Does your work offer any insight into how Vice President Harris’s chances compare to those of other Democrats who briefly had the chance to run after President Biden dropped out?

From the political science literature based on past elections, I think other possible candidates would be doing about as well, but we can’t know.

What first drew you to politics and statistics? Do you connect your interest in those topics to growing up in Washington, D.C.?

I don’t really connect it to growing up in the D.C. suburbs. I’ve always been interested in politics, which is something I share with a big chunk of the population. I studied math and physics in college, but at some point I got interested in statistics, which is interesting mathematically and also connects to so many areas of science and to issues taking place in society all around you, all the time.

What is some recent research you’ve undertaken that doesn’t involve the election?

One recent project, this one led by Philip Greengard, who was a postdoc here at Columbia, proposes an improvement to an algorithm called Bayesian improved surname geocoding (BISG), a method for guessing people's ethnicity based on their last name and address. BISG is used in important contexts like evaluating claims of discriminatory lending, and it’s also used by big tech companies to get a sense of who their users are. By looking carefully at data from Florida, Georgia, and North Carolina, we found that the tool has some systematic errors: It makes different sorts of errors in cities than in rural areas.
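For readers unfamiliar with BISG, the core of the method is a Bayes-rule combination of a surname-based race distribution with neighborhood demographics. The sketch below uses hypothetical numbers and simplifies the published method:

```python
# Rough sketch of the Bayes-rule core of BISG (made-up numbers, and a
# simplification of the published method): combine a surname-based race
# distribution with the racial composition of a geographic area, assuming
# surname and location are independent given race.

def bisg_posterior(p_race_given_surname, p_race_given_geo, p_race_overall):
    """P(race | surname, geo), up to normalization, then normalized."""
    unnorm = {
        r: p_race_given_surname[r] * p_race_given_geo[r] / p_race_overall[r]
        for r in p_race_given_surname
    }
    total = sum(unnorm.values())
    return {r: v / total for r, v in unnorm.items()}

# Hypothetical inputs for one surname and one census block group.
p_surname = {"white": 0.70, "black": 0.10, "hispanic": 0.15, "other": 0.05}
p_geo     = {"white": 0.20, "black": 0.60, "hispanic": 0.15, "other": 0.05}
p_overall = {"white": 0.60, "black": 0.13, "hispanic": 0.19, "other": 0.08}

print(bisg_posterior(p_surname, p_geo, p_overall))
```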

Another paper, with computer scientists Chris Tosh and Daniel Hsu, along with Philip Greengard and Ben Goodrich, presents a bunch of mathematical results related to the so-called “piranha problem.” The problem deals with the fact that in a lot of social science research, small, random factors are reported as having large effects on social and political attitudes and behavior—factors like hormone levels, exposure to subliminal images, news of football games and shark attacks, a chance encounter with a stranger, parental socioeconomic status, weather, the last digit of one’s age, the sex of a hurricane name, the sexes of siblings, the position in which a person is sitting, and many others. Studies have claimed to find large effects from these and other inputs, but mathematically, it would be extremely unlikely to have all these large effects coexisting—they would have to almost exactly cancel each other out or we would see complete chaos.

We call this the piranha principle by analogy to the folk belief that if you had a fish tank full of piranhas, they would eat each other until at most one was left. (Apparently this is not the case with real piranhas.) Our mathematical results help us understand the reality that these claimed effects on social and political behavior are typically wildly overestimated and do not stand up to replication. The paper emerged from questions we had about the replication crisis in science: A lot of landmark science and social science studies have not stood up to subsequent attempts to replicate their results, suggesting there’s some sort of bias in how the studies were designed or interpreted, and that the conclusions we’ve drawn from them are unreliable.
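A back-of-the-envelope illustration of the general idea, simplified and not the paper’s formal results: if an outcome has a fixed total variance, many roughly independent factors cannot each explain a large share of it.

```python
# Back-of-the-envelope illustration of the piranha idea (a toy example,
# not the paper's formal results): if an outcome has variance 1 and is
# influenced by many roughly independent factors, their squared
# correlations with the outcome must sum to at most 1, so the factors
# cannot all have large effects at once.

n_factors = 20
claimed_correlation = 0.3   # a "large" effect size of the kind often claimed

total_explained = n_factors * claimed_correlation ** 2
print(f"implied share of variance explained: {total_explained:.2f}")
# 20 * 0.09 = 1.80 > 1.0: impossible for independent factors, so most of
# the claimed effects must be much smaller, or must cancel each other out.
```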

One way to think about the piranha problem is to look at the current election campaign, where lots of unexpected things have happened but the polls remain close. In a politically polarized country such as ours, the large impacts of partisanship and ideology do not leave much space for the purported effects of shark attacks and those other silly things.

What keeps you busy outside of work?

I recently wrote a play, “Recursion,” with Jessica Hullman, a colleague at Northwestern University, that was performed at a computer science conference. It’s loosely based on real-life pioneers of computer science, and it has lots of jokes. It’s not Tom Stoppard, but I think people enjoyed it.

I’ve also been working with a friend on a multiplayer game called Buy That Guy that you play with a map and cards. Each player represents an interest group such as agribusiness or tech, and you buy and sell legislators who you can use to pass bills that get you money. The goal is to become the political leader by controlling the majority of the legislature. It’s cute and almost playable, but the game mechanics don’t quite click yet, so we’re still trying to figure some things out. I think that art is a little more forgiving than game design. If you have a play that’s 90% there, people can enjoy it, but a game that’s 90% complete isn’t going to be fun enough to play more than once. We still haven’t quite made it fun, but we enjoy the process of working on it.

