Exposing misinformation online is hard to do at scale and can veer into outright censorship. The wisdom of crowds can lead us to the answers.
Contemporary debates over online speech are stagnant and unimaginative. People have bemoaned the spread of misinformation, conspiracy theories, and hate speech on social media platforms for years, and proposed regulations that aim to solve the problem – but often at the expense of free speech, privacy, or competition. There is an alternative option: prediction markets. Prediction markets can discipline and expose deceptive claims in a way that avoids political bias and curbs on people’s freedom and privacy online.
Talk is cheap, as the old saying goes. This is especially clear on social media platforms. On Twitter, Facebook, YouTube, and a plethora of other online venues, users post comments knowing that they will incur no penalty for spreading falsehoods. At a time when creating anonymous and pseudonymous accounts is easy and free, such users also know that it is unlikely that they will pay for such falsehoods with embarrassment, shame, a worsened reputation, or a libel lawsuit in the ‘real world’. Not all peddlers of conspiracy theories use pseudonyms. Alex Jones, the American conspiracy theorist, is perhaps the best contemporary example. Yet some of the most prominent sources of misinformation use pseudonyms. The real-world identity of Q, the creator of the QAnon conspiracy, remains unknown to the public, and the identities of those who began spreading the Pizzagate conspiracy have yet to be confirmed.
Before Covid-19, conspiracy theories about presidential elections were Silicon Valley’s biggest US-based misinformation worry. As Covid-19 spread, so did misinformation about it, and prominent social media companies took steps to hamper the spread of harmful health content. But the size of the internet made this difficult.
At the scale of modern social media networks, false positives and false negatives are inevitable. Almost 500 hours of video content are uploaded to YouTube every minute, and each day almost two billion Facebook users post billions of pieces of content, including videos, pictures, event pages, comments, and marketplace sale advertisements. Tools that use AI can identify and remove some content known to be illegal or in violation of the platform’s policy, but even these systems sometimes make mistakes and let violating content slip through their nets.
Scale is only one of the challenges facing social media firms. Often, content moderation decisions hinge on the context of content rather than the content itself. A platform may remove a video of a child being bullied by his classmates if one of the bullies posts it along with denigrating comments, but may keep the video if an anti-bullying charity posts the same video to draw attention to their cause. It is not unusual for lawmakers across the world to cite instances where content is incorrectly flagged as justification for regulatory and legislative reform.
Content moderators and social media executives must consider scale and context when it comes to misinformation and disinformation. Not only can harmful content be shared around the world to millions with ease, but the context in which such content is discussed can make effective content moderation difficult. Most social media platforms will treat a BBC News report about disinformation and a conspiracy theorist’s video in different ways. Such nuances make perfect content moderation at scale impossible. Firms can improve their moderation tools, but at this scale no rule will ever be enforced with complete accuracy; the best we can do is make the rules better.
Over the last 100 years researchers have found examples of individuals collectively acting as good fact checkers, even if most individuals are woefully ignorant of the topic in question. Perhaps the best-known example is the ox-weight-guessing experiment discussed by Francis Galton in the pages of Nature in 1907.
Galton attended the West of England Fat Stock and Poultry Exhibition, held in Plymouth in 1906. Attendees could pay sixpence to guess the weight of an ox, with prizes offered to those whose guesses were closest to the ox’s actual weight. There were 787 guesses in total. Many of the guesses were wildly inaccurate, but the average of all the guesses was within one percent of the ox’s actual weight.
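Galton’s result is easy to reproduce in miniature. The toy simulation below (illustrative parameters, not Galton’s raw data; the 1,198 lb weight is his reported figure) generates hundreds of noisy individual guesses and compares the crowd’s average with the typical individual’s error:

```python
import random

def crowd_estimate(true_value: float, n_guessers: int, noise: float, seed: int = 0):
    """Simulate a Galton-style guessing contest.

    Each guesser is individually noisy; return the crowd's average guess
    and the average individual error, so the two can be compared.
    (Illustrative toy model, not Galton's actual 1906 data.)
    """
    rng = random.Random(seed)
    guesses = [true_value * (1 + rng.gauss(0, noise)) for _ in range(n_guessers)]
    crowd_mean = sum(guesses) / len(guesses)
    mean_individual_error = sum(abs(g - true_value) for g in guesses) / len(guesses)
    return crowd_mean, mean_individual_error

# An ox weighing 1,198 lb and 787 guessers, each off by ~10 percent on average:
crowd, typical_error = crowd_estimate(true_value=1198, n_guessers=787, noise=0.10)
print(f"crowd average: {crowd:.0f} lb, typical individual error: {typical_error:.0f} lb")
```

Because the individual errors are roughly symmetric, they largely cancel in the average, which is why the crowd estimate lands far closer to the truth than a typical individual does.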
Another example of the ‘wisdom of crowds’, as James Surowiecki called it in his book of that title, is the search for the USS Scorpion, a submarine that went missing in 1968. The US Navy did not know how far the Scorpion had traveled since its last reported location. A naval officer, John Craven, assembled a group of experts – mathematicians, submariners, and others – and asked them to guess the Scorpion’s location without allowing them to consult with one another. The experts placed bets on the location, with a bottle of scotch as the prize. Using Bayes’s theorem, Craven built a collective estimate from the group’s guesses, which ended up being only 220 yards from the Scorpion’s wreckage.
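Craven’s method can be caricatured in a few lines: weight each expert’s guess by a confidence score and average. The sketch below is a drastically simplified, one-dimensional stand-in for the Bayesian search theory Craven actually used, and the positions and weights are invented for illustration:

```python
def combine_guesses(guesses, confidences):
    """Combine independent location guesses into one collective estimate
    by weighting each guess by its (normalized) confidence.

    A simplified illustration of aggregating expert opinion; not Craven's
    actual Bayesian search computation.
    """
    total = sum(confidences)
    return sum(g * c for g, c in zip(guesses, confidences)) / total

# Five hypothetical experts place the wreck at these positions (nautical
# miles along the submarine's last known course), with differing confidence:
positions = [10.0, 12.5, 11.0, 14.0, 11.5]
weights = [0.9, 0.6, 0.8, 0.3, 0.7]
print(combine_guesses(positions, weights))  # collective estimate
```

No single expert need be right; the collective estimate can sit at a location none of them individually picked, just as Craven’s did.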
Betting markets are the ultimate form of the wisdom of crowds, since not only does everyone get a say, but their say is weighted by their confidence. They have been around for hundreds of years. Residents of sixteenth-century Italian city-states could bet on the outcomes of papal elections. In the US, political betting markets are as old as the country, with betting becoming especially fierce during the Jacksonian era. According to the economists Paul Rhode and Koleman Strumpf, during this time newspapers drove political betting, and ‘election betting articles that appeared in the press were boasts or challenges rather than reports of actual wagers transacted’. Political betting markets have appeared over the last few hundred years in Canada, Australia, and New Zealand, despite such markets often operating absent legal approval. Rhode and Strumpf point out that election betting in New Zealand was prevalent in the late nineteenth and early twentieth centuries despite newspapers not publishing betting odds, which was apparently illegal. In Australia, newspapers continued to list election wagers even after the government passed a bill making election wagers illegal.
Political betting markets were often frowned upon long before they emerged in Australia and New Zealand. In 1591, Pope Gregory XIV banned betting on papal conclave elections, the length of a papal reign, or the appointment of cardinals. Unlike in the UK, betting on election outcomes is prohibited in every US state, notwithstanding some partial exceptions. In the UK, common law allowed for betting, but prohibited bets that might lead ‘to immorality or impolity’ like betting on the outcome of a war or the death of a politician.
There are of course betting markets that forecast nonpolitical events. Sports books, whose lines are widely agreed to be so accurate that they are hard to profit from, are perhaps the best-known examples. Prediction markets get people to bring information into the open by effectively paying them for revealing it. If you know something that the market doesn’t, and you think the market price should be higher or lower than it currently is, then you can buy or sell, shifting the market in the direction of your view. Thus, in effect, betting markets encourage people to reveal information, then to collectively balance it and judge what it means overall. This is, of course, also what happens in securities markets, like stock markets.
Political and social betting markets could, however, be different from sports and financial markets. One reason is liquidity: not many people actually use them, whether because of regulation or lack of interest. A related reason is that they might be vulnerable to deliberate manipulation. In low-liquidity markets, big players can bet on unlikely outcomes to try to shift their perceived likelihood. In elections it is often believed that ‘strong horse’ candidates benefit from appearing likely to win, and there have often been suggestions that supporters of one or other candidate are deliberately piling into betting markets to raise the price on their favorite.
Ultimately, the question of whether betting markets work is empirical. For elections, empirical research suggests that betting markets can, with the right conditions in place, provide observers with more accurate predictions than opinion polls. For example, a study of the 2001 Australian election found that betting markets outperformed opinion polls not only in predicting the winner of the election but also dozens of marginal seats.
One of the most prominent prediction market sites is PredictIt, a New Zealand–based prediction market operated by Victoria University of Wellington. It is not a fully functioning market, due to low limits on betting as well as high transaction and withdrawal fees (and it is currently facing a CFTC challenge). But as it currently works, users can bet on political events such as elections by buying shares in ‘Yes’ or ‘No’. Shares trade between $0.01 and $0.99, with the actual outcome paying out one dollar – conveniently meaning that the price reflects the market’s probability judgment for a given outcome. For example, in a congressional race a PredictIt user, John Diviner, may believe that the underdog Candidate B is likely to win and see that on the PredictIt market for that race ‘Yes’ shares for Candidate B are trading for $0.15 each. Diviner buys 250 shares of ‘Yes’ for $37.50. Later, it turns out that Diviner was right and Candidate B did win. Because Candidate B won, PredictIt redeems Candidate B ‘Yes’ shares at one dollar each, and Diviner walks away with $250 – his $37.50 stake plus $212.50 in profit, before fees.
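The arithmetic of such a trade is simple enough to sketch. The function below generalizes the Diviner example; the 10 percent fee on profits matches PredictIt’s published fee, though exact fee rules vary by platform:

```python
def payout(shares: int, price: float, outcome: bool, fee_rate: float = 0.10) -> float:
    """Net result of buying 'Yes' shares at `price` dollars each.

    Shares redeem at $1.00 if the outcome occurs and $0 otherwise.
    `fee_rate` models a platform fee charged on profits.
    """
    cost = shares * price
    if not outcome:
        return -cost                      # shares expire worthless
    gross_profit = shares * 1.00 - cost   # redeem at $1 per share
    return gross_profit * (1 - fee_rate)  # net profit after the fee

# Diviner's trade from the text: 250 'Yes' shares at $0.15 = $37.50 staked.
print(payout(250, 0.15, outcome=True))   # profit if Candidate B wins
print(payout(250, 0.15, outcome=False))  # loss if Candidate B loses
```

The asymmetry is the point: a cheap ‘Yes’ share offers a large payoff precisely because the market considers the outcome unlikely, which is what makes the price a probability estimate.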
Another prominent set of prediction markets are those found on the Iowa Electronic Markets (IEM) platform. Set up by University of Iowa faculty in 1988, the two most common IEM bets are firstly which candidate will win an election race, and secondly the percent of the vote a particular candidate will receive.
The IEM performs well. Summarizing a study of IEM in The Wisdom of Crowds, Surowiecki notes:
So how has the IEM done? Well, a study of the IEM’s performance in forty-nine different elections between 1988 and 2000 found that the election-eve prices in the IEM were, on average, off by just 1.37 percent in presidential elections, 3.43 percent in other U.S. elections, and 2.12 percent in foreign elections . . . The IEM has generally outperformed the major national polls, and has been more accurate than them even months in advance of the actual election. Over the course of the presidential elections between 1988 and 2000, for instance, 596 different polls were released. Three-fourths of the time, the IEM’s market price on the day each of those polls was released was more accurate. Polls tend to be very volatile, with vote shares swinging wildly up and down. But the IEM forecasts, though ever-changing, are considerably less volatile, and tend to change dramatically only in response to new information. That makes them more reliable as forecasts.
Google has also built prediction markets to forecast the development of its products. In his book Infotopia: How Many Minds Produce Knowledge, the legal scholar Cass Sunstein published a list of some of these Google prediction markets, which included: ‘Total 30-day active Gmail users’, ‘Number of wireless pageviews’, and ‘Will Google open an Israel office?’
According to Bo Cowgill, the Google prediction market project manager, the company’s prediction markets yielded valuable probability estimates.
Google is not alone: Microsoft, Yahoo, France Telecom, and Hewlett-Packard have also used internal prediction markets to help inform business decisions.
Prediction markets do have limitations and do not always yield accurate predictions. Tradesports.com hosted a prediction market on President George W Bush’s first Supreme Court nominee. A day before Bush announced the nomination of John Roberts to the Supreme Court, the market price implied a 1.9 percent chance that he would be nominated.
More recently, commentators have pointed to the 2016 US presidential election and the Brexit referendum as examples of prediction markets producing poor forecasts. On the day before the 2016 US presidential election, PredictIt was selling Clinton ‘Yes’ shares for $0.82 and Trump ‘Yes’ shares for $0.22, thereby suggesting to observers that Clinton was the clear favorite. The day before the British vote to leave the European Union, PredictIt was selling ‘Yes’ shares on Britain leaving for $0.26 each. But such examples should not prompt us to discard prediction markets entirely – polling, aggregators, and experts also made poor predictions in these cases.
Prediction markets are not soothsayers and unlikely events do occur. If someone asks you, ‘What is the chance that I’ll get heads in four consecutive coin flips?’ you could respond, ‘One in 16. It’s pretty unlikely!’ If that person then goes on to get heads in four consecutive coin flips you were not a bad predictor to suggest that such an outcome was unlikely. In the case of the Brexit referendum and the 2016 US presidential election it is possible that two unlikely events occurred.
But there are also other factors that might have played a role. For voters who are undecided, recent news events can play an outsize role in shifting their minds during the last few days before an election. Groupthink and conventional wisdom can also affect people’s predictions of events of global political interest, making them complacent about evidence that conflicts with their views.
Even if betting markets and prediction markets are not always predictive, they can nonetheless reveal valuable information. The prediction market PredictIt and the forecasting platform Metaculus maintain leaderboards of predictors. Those keeping an eye on prediction markets, for example, can observe which participants are the best predictors.
Social media companies could build tools that allow for users to aggregate predictions. Such tools would allow users to make guesses on, for example, the number of British people who will test positive for Covid-19 in 2023. A social media company could offer money or points to the user who comes closest to picking the correct number and provide other users with a potentially useful database of predictions.
In fact, there is no reason in principle why a social media firm could not establish a prediction market option for users, thereby allowing users to bet on the accuracy or falsehood of a claim. A social media platform with a prediction market would allow for users to post a comment and signal to others that they are open to bets on the comment’s truth or falsehood. It would also allow users to express their confidence in an event occurring.
As the economist Alex Tabarrok once elegantly put it, ‘A bet is a tax on bullshit’. Over time, accounts could develop reputations for epistemic humility, or track records of sharing accurate information.
In contemporary social media platforms, there is no such system. A pseudonymous user can post ‘Covid-19 didn’t originate in a lab’, or ‘In 15 years the ice caps will have melted’, without giving a signal of how confident they are or whether they have a track record of making accurate predictions. With prediction markets in place, another user could place a bet against, for example, the claim that the ice caps will have melted in 15 years. Those who know more than the average user about climate science would have an incentive to flock to the market associated with the post.
Social media prediction and betting markets would not have to be attached to contentious issues, and it is not always anonymous or pseudonymous accounts that post predictions. Simple errors such as incorrect historical dates or misattribution of quotes could also be addressed with betting markets. If someone posts a comment claiming that William the Conqueror invaded England in 1065 someone could offer bets against that claim being true. Whereas a system in which users could simply vote on the truth of statements would be vulnerable to political bias, those offering bets would have skin in the game.
Given the scale of content on social media platforms, it is worth asking who would resolve prediction markets placed on a network such as Twitter, a relatively small social media platform when compared to Facebook and TikTok. Roughly 6,000 tweets are posted per second. Even if only half a percent of tweets became the subject of a prediction market, that would still mean 30 prediction markets emerging on the network every second.
One option would be for a social media company to allow trading to begin only after a certain number of people – say, a thousand users – support allowing bets on a comment. Another option would be to build the functionality for prediction markets and leave the resolution of these markets to institutions and users that emerge. A user could build their own market with a settlement date and terms. Were such a system in place in 2018, a Twitter user could create a prediction market with the conditions: ‘The contract will resolve to Yes if, at 23:59 on December 27, 2019, abortion is illegal in at least 20 states’. The same user could then nominate another user to act as the adjudicator of the market in case participants dispute the outcome.
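Such a user-created market needs surprisingly little structure: a claim, a settlement date, a nominated adjudicator, and a settlement rule. The sketch below is purely hypothetical – no platform exposes anything like this today – and uses a simple rule in which winners split the losers’ stakes pro rata:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class UserMarket:
    """Hypothetical sketch of a user-created prediction market on a
    social platform. All names and the settlement rule are invented."""
    claim: str
    settles_at: datetime
    adjudicator: str                             # user nominated to resolve disputes
    bets: list = field(default_factory=list)     # (user, side, stake) tuples
    outcome: bool = None

    def place_bet(self, user: str, side: bool, stake: float) -> None:
        if self.outcome is not None:
            raise ValueError("market already resolved")
        self.bets.append((user, side, stake))

    def resolve(self, outcome: bool) -> dict:
        """Settle the market: winners recover their stakes and split the
        losers' stakes in proportion to how much they bet."""
        self.outcome = outcome
        winners = [(u, s) for u, side, s in self.bets if side == outcome]
        losing_pot = sum(s for _, side, s in self.bets if side != outcome)
        winning_pot = sum(s for _, s in winners)
        return {u: s + (s / winning_pot) * losing_pot for u, s in winners}

# The 2018 example from the text:
m = UserMarket(
    claim="Abortion is illegal in at least 20 states",
    settles_at=datetime(2019, 12, 27, 23, 59),
    adjudicator="@trusted_arbiter",
)
m.place_bet("@alice", True, 10.0)
m.place_bet("@bob", False, 30.0)
print(m.resolve(False))  # Bob keeps his $30 stake and wins Alice's $10
```

A real implementation would also need rules for disputed outcomes (the adjudicator’s role), markets with no winners, and fees, but the core object is this small.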
On a social media platform with prediction markets, users might be able to garner rating points related to how often they correctly predict outcomes. Over time, these points could be used as a de facto reputation assessment tool, with users knowing to be wary of others who either do not partake in prediction market bets or have a low rating. Newspapers could use such rating points to highlight the accuracy of their reporters’ stories. It is possible that papers may choose to express confidence in their own reporters’ stories by fronting money for prediction market bets associated with their reporting.
Which users would make good adjudicators for prediction markets? Platforms could provide incentives to users to act as arbiters for markets by offering them a percentage of the total bet. With such an incentive structure in place, users could advertise themselves to the platform as being fair-minded and qualified to resolve disputes. Institutions such as universities, labs, think tanks, and museums could advertise themselves as arbiters for specific markets.
There are examples where mediators have emerged online on other social platforms that we could learn from. Eve Online, a massively multiplayer online role-playing game, provides examples of some players developing expertise in conflict resolution. For example, Sean Smith, a US diplomat, used his professional skills to market his Eve Online character, Vile Rat, as a diplomat. After Smith was killed during the 2012 assault on the US embassy in Benghazi, Libya, Eve Online players cited Smith’s diplomatic skills in their tributes. Because Eve Online allows its hundreds of thousands of players to join coalitions, forge alliances, and engage in trade and piracy, diplomats are a regular feature of the game.
Social media prediction markets would likely prompt other institutions and norms to emerge. Social media users have built bots to monitor all kinds of behavior, such as a Twitter account tracking changes to New York Times articles and a Mastodon account that posts screenshots of Wikipedia edits emanating from Congressional IP addresses. The Mastodon account emerged after Twitter suspended the @CongressEdits account when someone using a Congressional IP address uploaded personally identifiable information about lawmakers to Wikipedia.
Prediction markets would provide opportunities for similar roles to emerge, with users building tools to identify noteworthy markets or bets made by prominent figures (e.g., journalists, politicians, celebrities, etc.). Traditional financial institutions, no strangers to identifying arbitrage opportunities, would undoubtedly involve themselves in prediction markets if there was money to be won. Users could use results from prediction markets to rank different institutions such as universities and journalism outlets by order of success in the markets. Such institutions could also demonstrate faith in their employees by offering to bet on the accuracy of their predictions.
How would this help with misinformation? A social media site could automatically boost the content associated with users with a track record of making good predictions and true statements while limiting the reach of accounts with a track record of making inaccurate predictions. This is a more attractive approach than the centralized content moderation strategies used by Facebook, Twitter, and YouTube. Under this new system, the social media companies would be able to avoid much of the backlash they currently face from across the political spectrum by showing that the boosting and down-rating of content is a feature of user predictions, not a product of centralized design.
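One way to implement such boosting is to turn an account’s prediction track record into a reach multiplier, smoothed so that new accounts start neutral rather than maximally boosted or buried. The sketch below is purely illustrative – not any platform’s real ranking algorithm – and the prior-weight parameter is an assumption:

```python
def reach_multiplier(correct: int, total: int, prior_weight: int = 10) -> float:
    """Scale an account's content reach by its prediction track record.

    Uses a smoothed accuracy score: `prior_weight` pseudo-predictions at
    50 percent accuracy are mixed in, so accounts with no history score
    exactly neutral. Illustrative sketch only.
    """
    score = (correct + prior_weight * 0.5) / (total + prior_weight)
    return 2 * score  # 1.0 = neutral reach; above 1 boosted; below 1 limited

print(reach_multiplier(0, 0))     # new account: neutral
print(reach_multiplier(90, 100))  # strong record: boosted
print(reach_multiplier(10, 100))  # poor record: limited
```

The smoothing matters: without it, one lucky prediction would earn a maximal boost, and one mistake a maximal penalty, inviting exactly the gaming such a system is meant to prevent.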
In the last few years, we have seen social media sites take steps to address the spread of misinformation and disinformation. Twitter attached fact-checking labels to tweets that it believed spread misinformation about Covid-19 and the 2020 presidential election, including tweets written by then-president Donald Trump. Twitter has also launched Community Notes, a crowd-sourced fact-checking program previously named Birdwatch. The program allows contributors to add notes to tweets; Twitter users can signal their approval of these notes by voting. YouTube has added information about sources of videos (e.g., the BBC, Al-Jazeera, etc.) as well as links to Covid-19 resource pages. It has also attached addenda to videos on the flat-Earth theory and Holocaust denial with links to more reputable sources.
These measures may have improved online speech, but they have also risked polarization. Among the American political right there is widespread mistrust of contemporary social media companies. According to a 2021 Cato Institute survey, 90 percent of American conservatives do not trust social media companies. If market incumbents such as Twitter, Facebook, and YouTube implemented prediction markets, it is possible that such a change would result in some American conservatives trusting social media companies more, especially if those companies allowed third parties to act as arbiters and removed themselves from prediction market disputes. Yet many American conservatives would not be convinced. After all, social media companies would still be governing the markets.
But we must not make the perfect the enemy of the good. With prestige inevitably attaching to social media users with a track record of earning money by debunking nonsense claims, it would be harder for many peddlers of cheap talk to retain their devout followers. It would also be harder for pessimistic journalists and commentators to continue pouring cold water on emerging trends. Hundreds of articles have announced the death of the still very much alive Bitcoin, and Tesla’s stock price suggests those who have predicted the decline of the auto manufacturer over the last few years are not the best forecasters. Financial journalism is especially ripe for prediction markets given financial journalists’ propensity to overstate the negative and to understate the positive.
If prediction markets in social media have so much promise, why haven’t they emerged? Sports betting, lotteries, casino gambling, and stock markets are extremely widespread in all the places they are allowed. But political markets have typically not arisen. Some might argue that prominent social media sites lack an incentive to implement prediction markets because, as long as they generate revenue from advertisement sales, they can tolerate a significant amount of harmful content.
The most important obstacle is that their legal status is at best unclear, especially in the United States, where most of these services are based. The Iowa Electronic Markets are available to American users thanks to a no-action letter from the Commodity Futures Trading Commission (CFTC). Yet recent history shows that such letters hardly provide ironclad permission to operate. In August 2022 the CFTC revoked its permission for PredictIt to operate, claiming that PredictIt was operating outside the terms of its no-action letter. Even if gambling on elections were legal in the United States, the varied state laws governing online gambling (which remains illegal in most jurisdictions) would pose compliance headaches.
Such legislative barriers should be removed so that innovators and entrepreneurs feel free to incorporate prediction markets into online speech platforms. Augur, a decentralized prediction market platform built on the Ethereum blockchain, is one example of a project that would have more of a chance to thrive absent legislative restraints. Polymarket, built on the Polygon blockchain, is another prediction market platform. Neither Augur nor Polymarket allows US residents to trade or bet on its platform, owing to legislative and regulatory constraints.
Yet there is an argument that social media prediction markets have yet to take off because they might require a subsidy in order to launch, perhaps because the information provided by such markets yields positive externalities and thus is underprovided by the market.
However, this is not a dead cert. Many companies benefit, and profit, from a reputation for hosting only true information. This may be why recent evidence suggests that the media rarely actively lies. Almost all mistruths told by newspapers and online publications are acts of omission: they tell a range of truths while leaving out other important contextual truths. The major publications hold particularly closely to this standard, even when they mislead through emphasis, wording, and leaving facts out. As we saw, social media companies already go beyond what is legally required in order to raise the reputation of their sites. Prediction markets would give them another tool to do that.
But prediction markets would not just help them address misinformation. They may also lead to a culture of more careful predictions, better predictions, and more useful information. Information is extremely messy, complex, and hard to organize. This is especially true of information about the future. Publications like FiveThirtyEight and RealClearPolitics have built their brands on having better data and predictions than the competition. Google builds betting market odds into its live sports feeds as a predictor. Carefully weaving prediction markets into a social media site like Twitter could do a similar thing for it, raising its reputation at a time when it is in the balance.
If prediction markets are legalized and fail to take off, then that might just prove they are ill-suited to the task. But governments could experiment with subsidizing prediction markets to give them more liquidity. This would mean taking a range of bets on them at going rates, providing counterparties for other users.
But there are also good reasons why government officials would be wary of doing so. Many members of the public would view government subsidies for bets on elections, court decisions, the popularity of political figures, and other areas related to politics with suspicion, and such subsidies would likely prompt accusations of corruption that would undermine the legitimacy of firms seeking to build prediction markets. A similar problem might occur with the otherwise attractive option of philanthropic funding. Such interventions by rich funders have fed the fires of conspiracy theories before, and could easily do so again.
The spread of conspiracy theories, health misinformation, and propaganda occurs on platforms where billions of users meet to share and consume content, all of which they can do without having to pay. Lawmaker responses to this kind of harmful content have been either antithetical to free speech and privacy or anti-competitive. Fortunately, prediction markets could serve as institutions to improve the state of online speech.