Combined Brier Score (1 day before election): 0.08
Winner Prediction Accuracy (across both platforms, shared elections): 85.9%
Platform Correlation (between shared elections): 88.6%
Electoral Markets (election prediction markets): 6,810

Aggregate statistics across all 15 political market categories, combining data from both Polymarket and Kalshi platforms.

Aggregate Statistics by Political Category

Market counts and trading volume across all political categories (Polymarket + Kalshi combined).

Electoral Markets by Election Type

Breakdown of electoral markets by election type, showing coverage on each platform.

We measure prediction accuracy using the Brier score, the mean squared error between predicted probabilities and actual outcomes: 0 is perfect, 1 is worst. Markets consistently achieve scores below 0.10, indicating strong predictive performance.
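
The metric is a minimal sketch to compute, assuming binary outcomes coded 0/1:

```python
import numpy as np

def brier_score(predicted, outcomes):
    """Mean squared difference between predicted probabilities and 0/1 outcomes."""
    predicted = np.asarray(predicted, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    return float(np.mean((predicted - outcomes) ** 2))

# A market pricing the eventual winner at 0.9 scores (0.9 - 1)^2 = 0.01;
# a coin-flip price of 0.5 scores 0.25.
```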

Accuracy Improves Near Election Day

Each line represents a cohort of markets open at least X days before resolution. Markets with longer trading windows show lower Brier scores, indicating better predictions.

By Election Type

Brier scores across different election types (top 10 by volume).

By Political Category

Brier scores across all 15 political market categories.

Accuracy by Race Closeness

Brier scores by election margin. Close races (< 5% margin) are harder to predict, resulting in higher Brier scores.

A well-calibrated market means predictions match reality—when markets say 70%, outcomes should occur 70% of the time. We use quantile binning (equal sample sizes per bin) to ensure reliable estimates across the probability spectrum.
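
Quantile binning can be sketched as follows; the function name and bin count are illustrative, not the production implementation:

```python
import numpy as np

def calibration_quantile(pred, outcome, n_bins=10):
    """Quantile binning: sort predictions, split into equal-count bins,
    then compare mean predicted probability with observed outcome rate."""
    pred = np.asarray(pred, float)
    outcome = np.asarray(outcome, float)
    order = np.argsort(pred)
    bins = np.array_split(order, n_bins)  # equal sample sizes per bin
    mean_pred = np.array([pred[b].mean() for b in bins])
    actual_rate = np.array([outcome[b].mean() for b in bins])
    return mean_pred, actual_rate
```

For a well-calibrated set of markets, the two returned arrays track each other closely across all bins.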

Predicted vs Actual Outcomes

When markets predict 70%, outcomes should occur 70% of the time. Points near the diagonal indicate accurate calibration.

Where Markets Place Their Bets

Markets favor confident predictions near 0% or 100%, with fewer predictions in the uncertain middle ground.

Market liquidity analysis based on historical orderbook data. Tighter spreads and deeper books indicate more liquid markets where traders can execute with less slippage.

Bid-Ask Spread by Category

Median relative spread (%) by political category. Lower spreads indicate tighter markets with better liquidity.
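
The relative spread used here reduces to a one-line calculation (a sketch, assuming best bid/ask quotes in dollars):

```python
def relative_spread(best_bid, best_ask):
    """Bid-ask spread as a percentage of the midpoint price."""
    mid = (best_bid + best_ask) / 2
    return 100 * (best_ask - best_bid) / mid

# A 1-cent spread on a 50-cent contract is a 2% relative spread.
```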

Liquidity vs Prediction Accuracy

Each line shows how Brier score changes across liquidity percentiles within a category. Downward slope = higher liquidity improves accuracy. Dot size = number of markets.

Spread Distribution by Platform

Distribution of relative spreads across platforms. Which platform offers tighter markets?

Spread vs Trading Volume

Do higher-volume markets have tighter spreads? Each point represents a market.

Liquidity Over Time

Daily median relative spread (%) across all active markets. Lower spreads indicate tighter, more liquid markets. Data available from Oct 2025.

Do prediction markets exhibit partisan bias? We analyze whether markets systematically over- or under-predict Republican victories, comparing calibration and regression results across both platforms.

Republican Win Probability Calibration

Predicted Republican win probability vs actual Republican win rate. Points above the diagonal indicate markets underestimate Republican chances; points below indicate overestimation.

Partisan Bias Regression

OLS regression of prediction error on party and platform. Negative coefficients indicate the market was less confident in the winner (underpriced).
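
A minimal version of such a regression can be sketched with ordinary least squares; the variable encoding here (error as winner's final price minus 1, 0/1 dummies for party and platform) is a hypothetical illustration, not the exact specification used:

```python
import numpy as np

def bias_regression(error, republican, kalshi):
    """OLS of prediction error on a Republican-winner dummy and a platform dummy.
    `error` = winner's final price minus 1, so negative means underpriced."""
    X = np.column_stack([np.ones(len(error)), republican, kalshi])
    beta, *_ = np.linalg.lstsq(X, np.asarray(error, float), rcond=None)
    return dict(zip(["intercept", "republican", "kalshi"], beta))
```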

Distribution of Trader Partisanship

Among traders who bet for a party, what % of their volume was on that party? Broken down by trading activity. Polymarket only.
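
The per-wallet partisanship measure can be sketched like this (an assumed formulation: the share of a wallet's party-directional volume placed on its favored party):

```python
def partisanship(rep_volume, dem_volume):
    """Of a wallet's party-directional volume, the share on its favored party.
    Returns 0.5 for wallets with no party-directional volume."""
    total = rep_volume + dem_volume
    return max(rep_volume, dem_volume) / total if total else 0.5
```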

Distribution of Trader Accuracy

For each wallet, what % of their money was bet correctly? Broken down by trading activity level. One-time traders cluster at extremes (0% or 100%); active traders show more varied distributions. Polymarket only.

Trader Partisanship: Actual vs Perfect

What if all incorrect bets were flipped to the winning side? Compares actual partisanship with counterfactual. Filtered to traders with 2+ trades. Polymarket only.

Polymarket and Kalshi often list the same elections, allowing direct comparison. On shared elections, both platforms show remarkably similar predictions with 96.7% correlation.

Platform Agreement

Each point shows both platforms' final prediction for the winning candidate. Proximity to diagonal = agreement.

Head-to-Head Results

Which platform was more accurate on shared elections?

Platform Comparison

Key metrics across both prediction markets.

We track 23,000+ political prediction markets across both platforms, covering elections, policy, appointments, and more. Electoral markets represent the largest category by both count and volume.

Trading Volume Over Time

Monthly volume in millions USD. November 2024 saw record activity during the US presidential election.

Markets by Category

Distribution of political markets across 15 categories.

Prediction Confidence vs Trading Volume

How trading volume relates to prediction confidence. Points colored by outcome correctness. Use buttons to toggle platforms.

Kalshi and Polymarket both let traders bet on US GDP growth, but they structure the bets differently — making direct comparison impossible from raw prices alone. We use statistical modeling to translate both into a common language: what does each market think GDP growth will be? When two independent markets with real money at stake agree, that's a stronger signal than either one alone. When they disagree, it reveals where genuine uncertainty lies.

Implied GDP Distribution

Each curve shows what a market's traders collectively believe about GDP growth. The peak is the most likely outcome; the width shows how uncertain traders are. Shaded bands represent the range of plausible curves given market noise — where bands overlap between platforms, the markets essentially agree.

Contract Prices vs Fitted Model

How well does our model match the actual market prices? Dots are real prices traders are paying; the line (or bars) show what the fitted model predicts. Close alignment means the model captures what the market is saying. Kalshi sells "GDP will exceed X%" contracts at multiple thresholds. Polymarket sells "GDP will land in this range" contracts for different buckets.

Probability Queries

Once we have a full probability distribution from each market, we can answer any question about GDP — not just the specific thresholds each platform happens to trade. The 95% CI columns show the range of plausible answers given market noise. Small differences between platforms suggest genuine consensus; large differences highlight where the markets disagree.

Expected GDP Over Time

How has the market's GDP expectation shifted day by day? Each point is a full distribution fit for that day's prices; the smoothed line filters out daily noise to reveal the underlying trend. Both markets saw GDP expectations fall sharply from ~3.3% in late January to ~2% by March — likely reflecting tariff uncertainty. The confidence bands show how precisely we can estimate the trend.

Model Uncertainty Over Time

How sure are traders about where GDP will land? A lower standard deviation means the market is concentrating bets around a narrow range; a higher value means traders see a wider range of plausible outcomes. Tracking this over time shows whether consensus is building or dissolving.

Why This Matters

Prediction markets are often cited as a single number — "markets say there's a 48% chance GDP exceeds 2%." But a single contract only tells you about one specific threshold. The full picture — what do traders collectively believe about the entire range of possible outcomes? — is locked inside the prices of many related contracts, and it's different on every platform.

This analysis unlocks that picture. By fitting a probability distribution to each platform's contract prices, we can answer any question about what the market expects — not just the questions each platform happens to list — and compare the two platforms on equal footing.

The Problem: Apples vs Oranges

Kalshi trades "will GDP exceed X?" contracts at 8 different thresholds (1%, 1.5%, 2%, etc.). Each price tells you the probability that GDP growth will be above that line. Together, these trace out a survival curve.

Polymarket trades 7 range buckets ("GDP between 1% and 1.5%", "between 1.5% and 2%", etc.). Each price tells you the probability of landing within that range.

You can't directly compare a Kalshi "P(GDP > 2%)" price with a Polymarket "P(1.5% ≤ GDP < 2%)" price — they're answering different questions. But both encode information about the same underlying reality: where will GDP actually land?
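
The two contract structures are related by simple differencing: P(a ≤ GDP < b) = P(GDP > a) − P(GDP > b). With hypothetical prices (not real market values), a Kalshi-style survival curve converts to Polymarket-style buckets like so:

```python
# Hypothetical Kalshi-style prices for P(GDP > threshold):
thresholds = [1.0, 1.5, 2.0, 2.5, 3.0]
survival   = [0.95, 0.80, 0.48, 0.20, 0.05]

# Differencing the survival curve yields range-bucket probabilities.
buckets = {
    (lo, hi): p_lo - p_hi
    for (lo, p_lo), (hi, p_hi) in zip(
        zip(thresholds, survival), zip(thresholds[1:], survival[1:])
    )
}
# e.g. P(1.5 <= GDP < 2.0) = 0.80 - 0.48 = 0.32
```

Real prices are noisy, so the differences need not be exactly consistent across platforms; that inconsistency is precisely what the distribution fitting below absorbs.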

The Solution: Distribution Fitting

Step 1: Verify they're measuring the same thing. We parse each platform's resolution rules and confirm both settle on the same data release: the Bureau of Economic Analysis (BEA) Advance Estimate of Q1 2026 GDP growth.

Step 2: Align timestamps. Kalshi and Polymarket report prices on different schedules. We match them to the same calendar dates so every comparison is apples-to-apples.
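
The alignment step amounts to an inner join on calendar date; a minimal sketch with hypothetical daily closing prices:

```python
# Hypothetical daily closing prices keyed by calendar date.
kalshi = {"2026-03-01": 0.48, "2026-03-02": 0.50}
poly   = {"2026-03-02": 0.47, "2026-03-03": 0.49}

# Keep only dates where both platforms reported a price.
shared = {d: (kalshi[d], poly[d]) for d in sorted(kalshi.keys() & poly.keys())}
```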

Step 3: Fit a bell curve. We find the Normal distribution (bell curve) whose shape best matches each platform's contract prices. A contract with a tight bid-ask spread and high trading volume gets more influence on the fit, since it carries more reliable price information. We also test Skew-Normal and two-component mixture models, and pick the simplest model that fits well.
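
The Normal fit can be sketched as weighted least squares between the model's survival function and the observed threshold prices; the optimizer choice and starting values here are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_normal(thresholds, prices, weights=None):
    """Find (mu, sigma) so the Normal survival function best matches the
    observed P(GDP > t) prices; `weights` can upweight liquid contracts."""
    t = np.asarray(thresholds, float)
    p = np.asarray(prices, float)
    w = np.ones_like(p) if weights is None else np.asarray(weights, float)

    def loss(theta):
        mu, log_sigma = theta  # fit log(sigma) to keep sigma positive
        return np.sum(w * (norm.sf(t, mu, np.exp(log_sigma)) - p) ** 2)

    res = minimize(loss, x0=[float(t.mean()), 0.0], method="Nelder-Mead")
    mu, log_sigma = res.x
    return float(mu), float(np.exp(log_sigma))
```

The same loss works for Polymarket's range buckets by replacing `norm.sf(t, ...)` with CDF differences over each bucket.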

Step 4: Quantify uncertainty. With only 7–8 contracts per platform, the fitted curve could shift if a few prices changed. We simulate this by randomly resampling the contracts 1,000 times and refitting each time. The range of results gives us confidence intervals — a built-in honesty check on how precisely we can pin down what the market thinks.
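
The resampling scheme looks like this in outline. The `implied_median` estimator below is a simplified stand-in (interpolating where the survival curve crosses 0.5) rather than the full distribution refit:

```python
import numpy as np

def implied_median(thresholds, prices):
    """Stand-in point estimate: the GDP value where P(GDP > x) crosses 0.5."""
    order = np.argsort(prices)
    return float(np.interp(0.5, np.asarray(prices, float)[order],
                           np.asarray(thresholds, float)[order]))

def bootstrap_ci(thresholds, prices, estimator, n_boot=1000, seed=0):
    """Resample contracts with replacement, re-estimate each time, and
    report a 95% interval for the statistic."""
    rng = np.random.default_rng(seed)
    t = np.asarray(thresholds, float)
    p = np.asarray(prices, float)
    stats = [estimator(t[i], p[i])
             for i in (rng.integers(0, len(t), len(t)) for _ in range(n_boot))]
    return np.percentile(stats, [2.5, 97.5])
```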

Step 5: Compare. Now both platforms speak the same language — a full probability distribution over GDP growth. We can ask any question ("what's the probability of a recession?", "what's the expected value?") and compare the answers, with error bars.
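
Once a distribution is fitted, any query is a CDF evaluation. The parameters below are illustrative placeholders, not real fitted values:

```python
from scipy.stats import norm

mu, sigma = 2.1, 0.6                    # illustrative fitted parameters
p_above_2  = norm.sf(2.0, mu, sigma)    # P(GDP > 2%)
p_negative = norm.cdf(0.0, mu, sigma)   # P(GDP < 0%)
expected   = mu                         # E[GDP] for a Normal distribution
```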

How to Read the Charts

Distribution chart: Each bell curve shows one platform's implied beliefs. Where curves overlap, markets agree. The peak is the most likely GDP outcome; wider curves mean more uncertainty. Shaded bands show the range of curves that are consistent with the raw market prices.

Fit chart: Shows how well our model matches real prices. If dots sit on the line, the model is capturing what the market is saying. Gaps between dots and the line show where the model approximates.

Probability table: The payoff — precise answers to questions like "what's the chance GDP exceeds 2%?", derived from the fitted distribution. The CI columns tell you how confident we can be in each number.

Time series: How expectations have shifted over time. The smoothed line filters out daily noise. When both platforms trend in the same direction, it's a strong signal; divergence suggests emerging disagreement.

Stat cards: E[GDP] is the expected value — the market's best single guess. Gap measures how far apart the two platforms are. JSD (Jensen-Shannon Divergence) measures overall disagreement across the entire distribution; values below 0.05 indicate near-consensus.
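
The JSD comparison can be sketched by discretizing both fitted curves on a shared grid; the parameters here are illustrative, and note that SciPy's `jensenshannon` returns the distance, whose square is the divergence:

```python
import numpy as np
from scipy.spatial.distance import jensenshannon
from scipy.stats import norm

# Discretize two fitted curves (illustrative parameters) on a shared grid.
grid = np.linspace(-1, 6, 500)
p = norm.pdf(grid, 2.1, 0.60); p /= p.sum()
q = norm.pdf(grid, 2.0, 0.55); q /= q.sum()

jsd = jensenshannon(p, q, base=2) ** 2  # square the distance to get divergence
```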

Methodology & Papers