As you have probably heard, President Biden struggled mightily in the first presidential debate on June 27th. How bad was it?
One way to answer is with prediction markets, where people bet on who will win an election. At 8:40pm on June 27th, just before the debate began, one site that aggregates betting markets put Biden’s chances of winning at 36 percent. By 11pm, that figure had dropped to 24 percent.
It was much the same reaction I saw on social media (“Wow, Biden looks bad and isn’t going to win”), but it came with the concreteness and addictiveness of the New York Times’s election-night needle. The markets also registered Biden’s partial recoveries: his odds of winning ticked up from about 10 percent to 13 percent during his 45-minute press conference yesterday evening. The mood had been translated into concrete numbers.
But are the concrete numbers correct? And is there a way to make them more useful?
Markets are perhaps the best way humanity has ever devised to synthesize new information and rapidly derive new insights into important problems. For example, when a company releases a bad earnings report, the company’s stock price almost immediately falls, as experts on the question of what that stock will be worth in the future adjust their bets based on that earnings report. People who are good at their jobs (and, increasingly, AI algorithms) make money, and people who are bad at their jobs lose money. Meanwhile, the entire world benefits from fast, accurate pricing.
The dream of election prediction markets is that something similar could be built for political issues. Over the past two weeks, election betting markets have fluctuated wildly in response to every change in the news surrounding Biden, revealing both the promise of that dream and its serious current limitations.
The future of prediction markets
The open secret of all election models, from Nate Silver’s to the New York Times’s, is that there is a fair amount of expert judgment involved. Yes, there are a lot of polls, but how much weight to give each poll, how much movement to factor in, how much weight to give the polls versus the economic fundamentals, and so on, is all judgment. I disagree with people who say that election forecasting is more art than science, but that’s because I think they misunderstand how much judgment is involved in the science.
Most of these election models work by running countless computer simulations under different assumptions and then publishing the results. Prediction markets, by contrast, have a much simpler setup.
You buy “Biden” and you get paid $1 if Biden wins the election. Or you buy “Trump” and you get paid $1 if Trump wins the election. Or you buy some other name that pays if that person wins the election. How much are people willing to pay for “if Biden wins the election, I get $1”? That’s how the market judges the likelihood of a Biden win.
Right now, people are only willing to pay about 12 cents for the right to receive $1 if Biden wins the election. If you think they are undervaluing Biden, you can buy all these contracts from them and get very rich if Biden wins.
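The arithmetic here is simple enough to sketch in a few lines of Python. This is purely illustrative: the functions and prices below are made up for the example, not any real market’s data or API.

```python
# Illustrative sketch: how a binary prediction-market contract implies a
# probability, and when buying it has positive expected value.
# Contracts pay 100 cents if the event happens, 0 otherwise.

def implied_probability(price_cents: float) -> float:
    """A contract paying $1 (100 cents) if the event happens, trading at
    `price_cents`, implies this probability (ignoring fees and time value)."""
    return price_cents / 100.0

def expected_profit(price_cents: float, your_probability: float) -> float:
    """Expected profit per contract, in cents, under your own estimate:
    you gain (100 - price) with probability p, lose price with (1 - p)."""
    return (your_probability * (100 - price_cents)
            - (1 - your_probability) * price_cents)

# "Biden wins" trading at 12 cents -> the market implies a 12% chance.
print(implied_probability(12))    # 0.12

# If you think the true chance is 25%, each 12-cent contract is worth
# 0.25 * 88 - 0.75 * 12 = 13 cents of expected profit.
print(expected_profit(12, 0.25))  # 13.0
```

This is also why prices track probabilities: if many traders believe the true chance is above 12 percent, buying is profitable in expectation, and their purchases push the price up until that edge disappears.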
(In accordance with Vox’s ethics code, I don’t bet money on the subjects I cover, so I don’t participate in the election betting markets. Still, when I hold a dissenting view, I try to say publicly what I would buy. For example, at the start of 2020, I said I thought Biden had a 60 percent chance of becoming the Democratic nominee, even though prediction markets had him at 33 percent.)
There are many reasons to believe that markets, at least in principle, could be very good at predicting elections: Published research shows that prediction market-like projects have worked in many past situations, and aggregators like SciCast and Metaculus have demonstrated surprisingly good track records on policy issues.
And betting markets on the morning of Election Day have a very good track record: If the market shows someone has a 20 percent chance of winning, then that person generally wins about 20 percent of the time.
The other argument is that markets are powerful for exactly these kinds of problems: problems where a lot of smart people are thinking hard, and a lot of data is available, but synthesizing that information and acting on it requires difficult judgment, and where, ultimately, we find out the one right answer.
When people put their money where their mouths are, they tend to become better at making predictions, and the wisdom of crowds becomes reality: in markets, poor forecasters steadily lose money to good ones.
But the simplest argument is that if Nate Silver is consistently beating the market, people will trade based on his predictions until the market simply reflects what Nate Silver thinks. A high-volume, highly liquid market should at least not predictably underperform other sources, because underperformance is an opportunity to make money.
But while prediction markets can be a highly effective way to forecast elections, they also suffer from some significant flaws, which have been on full display over the past two weeks.
What could be wrong with betting on elections?
This entire newsletter has so far ignored one very important point: election prediction markets are mostly illegal in the US. PredictIt operates under a research exemption, but the Commodity Futures Trading Commission (CFTC) has repeatedly threatened to shut it down. The other major markets mentioned in this article, Betfair and Polymarket, generally do not allow US citizens to participate (although Betfair makes exceptions for US citizens in a few states).
These restrictions have two big drawbacks. First, they prevent the people who know the most about elections (journalists, political staffers, and so on) from betting on them, which makes the markets less useful as information aggregators. (For example, the soybean futures market would be much less accurate without the participation of companies that know a lot about soybeans.) Without the most knowledgeable people, the crowd is inherently less smart.
Perhaps more importantly, these restrictions mean that market liquidity is generally limited. (Liquidity refers to how much money is trading in the market.) For big questions like “Biden vs. Trump,” liquidity is fine; hundreds of millions of dollars are trading.
But for smaller markets, this is a big problem. One manifestation is the markets’ persistent tendency to overprice extreme longshot candidates like Michelle Obama. Another: immediately after the debate, markets made California Governor Gavin Newsom, rather than Vice President Kamala Harris, the overwhelming favorite to succeed Biden, which seemed highly implausible.
In all of these cases, there simply isn’t enough trading taking place to correct such errors. Low liquidity also means the markets are relatively easy to manipulate, and this has happened at least once.
Ironically, while the possibility of election manipulation is the main reason the CFTC wants to ban prediction markets, which some elected officials have denounced as a “clear threat to democracy,” the effective ban is what makes them so easy to manipulate. It takes hundreds or thousands of times more capital to manipulate commodity prices than to manipulate election betting odds, precisely because commodity trading is legal.
These shortcomings make prediction markets a double-edged sword. On the one hand, they are powerful aggregators of dispersed opinion, with built-in accountability mechanisms and a good track record on high-volume questions. On the other hand, they are too easy to manipulate, especially in smaller markets, systematically misprice longshot candidates, and don’t clearly outperform professional forecasters, so they can feel more like noise than a source of truth.
Towards a better electoral market
Over the past two weeks, I have been watching another pattern, and I don’t know whether to call it positive or negative, but it suggests that prediction markets are not yet living up to their potential.
Markets were quick to conclude that Biden’s debate performance was a dud, and the chances of Democrats choosing another candidate briefly spiked: the odds of Biden dropping out peaked at 35 percent on July 4. But after Biden dug in, markets swung back to saying he was the likely nominee, and as of July 9, the odds that he would stay jumped to 83 percent.
Then George Clooney, along with a growing number of Democratic lawmakers, criticized Biden, and his odds of staying in the race plummeted again, before Biden pushed them back up with an impressive performance at Thursday night’s press conference.
Is this a rational response to new information? Or a wild swing of emotion? Are markets seeking truth, or, as Andrew Gelman once worried, are they “just loud news aggregators”? On an unprecedented question like whether public pressure will force Biden to step aside, is there even a meaningful difference between rational aggregation and an emotion-driven stampede?
Where prediction markets are likely to be most valuable is in conditional markets: “If this person becomes the Democratic nominee, will they beat Trump?” After all, that’s the question most Democrats actually care about, and these markets do suggest that Biden should be replaced. But such markets are much smaller and much less liquid, and as a result, I’m not sure they add much clarity to the public debate.
I don’t think prediction markets are doing anything wrong if they merely express general sentiment about whether Biden will step down. But I dream of a world in which markets provide reliable, accurate answers to the question “Which Democrat is most likely to win in November?” That could genuinely shape the public debate over Biden stepping down. Better still would be a world in which markets identified Biden’s cognitive decline, and the resulting electoral weakness, early — not at the exact same moment as everyone else.
That’s what it would mean for markets to be a major force in policymaking. We’re not there yet.
A version of this story originally appeared in our Future Perfect newsletter. Sign up here.