Decision Labs

Methodology

How we select polls

We include polls from reputable organizations that meet basic methodological standards: disclosed sample sizes, defined survey populations, and publicly available topline results. We track polls from nonpartisan research institutions (Pew, Gallup, AP-NORC), commercial pollsters (Morning Consult, YouGov), academic programs (Quinnipiac, UMD), and advocacy-sponsored polls (AIPI, FLI).

Advocacy-sponsored polls are included but flagged. Their question wording may reflect the sponsor's policy perspective, which we note in our data.

Pollster quality tiers

Each pollster receives a quality rating from A+ to C based on the following factors (see the example record after this list):

  • Methodology — probability-based sampling (A+/A) vs. opt-in online panels (B+/B)
  • Track record — established reputation and historical accuracy
  • Partisan lean — whether the pollster or sponsor has a known ideological orientation
  • Advocacy sponsorship — polls commissioned by organizations with a policy position on AI
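
For illustration, a pollster entry in our dataset can be pictured as a small record that captures these factors. The field names and values below are hypothetical, not our actual schema.

```python
# Hypothetical pollster record illustrating the rating factors above.
# Field names and example values are illustrative, not the tracker's actual schema.
pollster = {
    "name": "Example Research Group",    # a made-up organization
    "tier": "A",                         # quality rating on the A+ to C scale
    "methodology": "probability_panel",  # vs. "opt_in_online_panel"
    "partisan_lean": None,               # e.g. "left", "right", or None
    "advocacy_sponsor": False,           # True when the sponsor has a policy position on AI
}
```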

Trend lines vs. snapshots

This is our most important methodological rule: we never draw a trend line through data points from different pollsters.

Different pollsters use different question wording, different sample types, and different methodologies. A finding of "73% support regulation" from one pollster and "69% support regulation" from another does not mean support dropped 4 points — it means two different organizations measured a similar concept in different ways.

A poll result is shown as a trend line only when:

  • The same pollster asked the same question
  • The methodology was consistent across waves
  • There are at least 3 data points over at least 12 months

Currently, the only series that meets this standard is the Pew Research Center's "concerned vs. excited about AI" question, tracked since December 2021 across 6 waves.

Everything else is displayed as standalone snapshots: bar charts, comparison tables, and stat cards that present the data without implying false trends.
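
As a rough sketch of how this rule can be applied in code (the field names are assumptions, not our actual schema), a candidate series qualifies for a trend line only if it passes a check like the following:

```python
def qualifies_as_trend(series: list[dict]) -> bool:
    """Return True if a candidate series may be drawn as a trend line.

    Each point is assumed to look like (illustrative field names):
    {"pollster": str, "question_id": str, "methodology": str, "date": datetime.date, "value": float}
    """
    if len(series) < 3:                                # at least 3 data points
        return False
    if len({p["pollster"] for p in series}) != 1:      # same pollster throughout
        return False
    if len({p["question_id"] for p in series}) != 1:   # same question wording
        return False
    if len({p["methodology"] for p in series}) != 1:   # consistent methodology across waves
        return False
    span = max(p["date"] for p in series) - min(p["date"] for p in series)
    return span.days >= 365                            # spanning at least 12 months
```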

Topic clusters

We group polls into 13 topic clusters to make the data navigable:

  • General Sentiment
  • Risk Assessment
  • Regulation - Appetite
  • Regulation - Who Regulates
  • Regulation - Federalism
  • National Security
  • Trust in Institutions
  • Jobs and Economy
  • Kids Safety
  • Development Speed
  • International Cooperation
  • Infrastructure
  • AI Usage

Partisan breakdowns

Whenever a poll provides results by party identification, we include Democratic, Republican, and (when available) Independent breakdowns. The partisan gap — the percentage-point difference between D and R responses — is a core metric throughout the tracker.

This partisan lens is our key differentiator. Most poll aggregators report only topline numbers; the story is often in the partisan splits — where the parties agree, where they diverge, and where surprising coalitions emerge.
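
As a minimal worked example (the numbers here are hypothetical), the partisan gap is simply the absolute percentage-point difference between the two parties' results:

```python
def partisan_gap(dem_pct: float, rep_pct: float) -> float:
    """Percentage-point gap between Democratic and Republican responses."""
    return abs(dem_pct - rep_pct)

# Hypothetical example: 82% of Democrats and 61% of Republicans support a measure.
gap = partisan_gap(82, 61)  # 21.0 points
```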

Poll-of-polls average

For topic areas where multiple pollsters have measured similar concepts, we calculate a weighted average to provide an aggregate indicator. This is not a precise statistical estimate — it is a directional summary that accounts for differences in poll quality, sample size, and recency.

The weight formula combines three factors:

  • Sample size: Larger samples receive more weight. Polls without a disclosed sample size receive a default 0.5 weight.
  • Recency: More recent polls are weighted higher using exponential decay (half-life of approximately one year).
  • Pollster quality: Our quality tier ratings (A+ through B) translate to numeric weights. Higher-rated pollsters contribute more to the average.

The final weight for each poll is the product of these three factors. The average is the weighted mean of all poll values in that topic area.
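
A minimal sketch of the computation follows. The tier weights, the square-root scaling of sample size, and the cap are illustrative placeholders rather than the tracker's actual constants; the half-life of roughly one year matches the description above.

```python
import math
from datetime import date

# Illustrative tier weights; the actual values used by the tracker may differ.
QUALITY_WEIGHTS = {"A+": 1.0, "A": 0.9, "B+": 0.75, "B": 0.6, "C": 0.4}

def poll_weight(sample_size, poll_date, tier, today, half_life_days=365):
    """Final weight = sample-size factor x recency factor x quality factor."""
    # Sample size: larger samples count more; undisclosed sizes get the default 0.5.
    # The square-root scaling relative to n = 1,000 and the 1.5 cap are assumptions.
    if sample_size is None:
        size_w = 0.5
    else:
        size_w = min(math.sqrt(sample_size / 1000), 1.5)

    # Recency: exponential decay with a half-life of roughly one year.
    age_days = (today - poll_date).days
    recency_w = 0.5 ** (age_days / half_life_days)

    return size_w * recency_w * QUALITY_WEIGHTS[tier]

def poll_of_polls(polls, today=None):
    """Weighted mean of poll values in a topic area.

    Each poll is assumed to look like (illustrative field names):
    {"value": 73.0, "sample_size": 2000, "date": date(2025, 3, 1), "tier": "A"}
    """
    today = today or date.today()
    weights = [poll_weight(p["sample_size"], p["date"], p["tier"], today) for p in polls]
    return sum(w * p["value"] for w, p in zip(weights, polls)) / sum(weights)
```

Under a scheme like this, a poll with no disclosed sample size, an older field date, or a lower quality tier pulls the average less than a recent, large, highly rated poll.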

Caveat: This average aggregates polls that ask related but not identical questions using different methodologies. Treat it as a general indicator of the direction and magnitude of public opinion, not as a precise measurement.

Cross-pollster comparisons

When displaying results from different pollsters side by side, we always (see the sketch after this list):

  • Note that question wording, methodology, and sample type differ
  • Sort by value (not by date) to make the pattern visible
  • Include source and date on each data point
  • Flag advocacy sponsors
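
A sketch of that preparation step (the field names are assumptions, not our actual schema):

```python
def comparison_rows(polls: list[dict]) -> list[dict]:
    """Prepare cross-pollster results for side-by-side display.

    Each poll is assumed to look like (illustrative field names):
    {"value": 73.0, "pollster": "Example Research Group", "date": "2025-06", "advocacy_sponsor": False}
    """
    ordered = sorted(polls, key=lambda p: p["value"], reverse=True)  # sort by value, not by date
    return [
        {
            "label": f'{p["pollster"]} ({p["date"]})',               # source and date on every point
            "value": p["value"],
            "flag": "advocacy-sponsored" if p["advocacy_sponsor"] else None,
        }
        for p in ordered
    ]
```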

Known limitations

  • Question wording effects: How a question is framed dramatically affects the response. "Support a national AI policy" (76%) and "oppose a moratorium on state AI regulation" (59%) approach the same underlying issue from different angles.
  • Sample type matters: "U.S. adults" vs. "registered voters" vs. "likely voters" are different populations.
  • Sponsor bias: Advocacy organizations may design questions that elicit favorable responses. We flag but include them.
  • Recency: The AI landscape changes rapidly. A poll from early 2024 may reflect a very different information environment than one from late 2025.