How TexTak Forecasts Work
Every forecast on TexTak is a falsifiable prediction with a public probability, explicit resolution criteria, and a tracked record. Here's how the system works from signal to score.
What Is a Forecast?
A TexTak forecast is a specific, time-bound prediction about the AI industry expressed as a probability. Not a hot take. Not an opinion column. A number — 35%, 68%, 74% — that says “this is how likely we believe this outcome is, given everything we can see right now.”
Each forecast has four components:
The outcome. A specific, binary result. Either it happens or it doesn't. No weasel words.
The probability. Our assessed likelihood as a percentage. This number moves as new evidence arrives.
The horizon. When we expect resolution — Q4 2028, Dec 2027, etc. Every forecast has an expiration date.
The resolution criteria. The exact conditions that determine whether the forecast resolved true or false. Written before the first probability is assigned.
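In code, the four components could be modeled as a simple record. The class, field names, and sample values below are illustrative guesses, not TexTak's actual schema:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Forecast:
    """One TexTak-style forecast with its four components."""
    outcome: str        # specific, binary outcome statement
    probability: float  # current assessed likelihood, 0.0 to 1.0
    horizon: date       # when we expect resolution
    criteria: str       # pre-written resolution criteria

# Hypothetical example, not a real TexTak forecast record.
fda = Forecast(
    outcome="FDA approves an autonomous AI diagnostic",
    probability=0.35,
    horizon=date(2028, 12, 31),
    criteria="A formal FDA approval announcement before the horizon date",
)
```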
How Probabilities Move
Every news story we ingest is tagged with signal data — which forecast it affects, and in which direction. A new FDA statement about AI diagnostics might push the “FDA approves autonomous AI diagnostic” forecast up by 2-3 points. A lobbying report against it might push it down.
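A minimal sketch of how such a signal might nudge a probability. The `apply_signal` function and its clamping bounds are hypothetical, not TexTak's real update rule:

```python
def apply_signal(probability: float, delta_points: float) -> float:
    """Shift a probability by a signal's direction and magnitude
    (in percentage points), clamped so it never hits 0% or 100%."""
    new_pct = probability * 100 + delta_points
    return max(1.0, min(99.0, new_pct)) / 100

p = 0.35                   # "FDA approves autonomous AI diagnostic"
p = apply_signal(p, +2.5)  # supportive FDA statement arrives
p = apply_signal(p, -1.0)  # lobbying report against it
# p is now roughly 0.365
```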
The sparkline chart on each forecast tile shows this movement over time. The shaded area under the line traces the probability's history at a glance. The dashed line at 50% marks the threshold between “more likely than not” and “still unlikely.”
Probabilities aren't predictions of certainty. A 70% forecast means: if we made 10 predictions at this confidence level, we'd expect roughly 7 to come true. The other 3 being wrong isn't failure — it's calibration working correctly.
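That calibration claim is easy to simulate. The snippet below is illustrative only: it draws many simulated outcomes at a fixed 70% probability and checks that roughly 70% resolve true.

```python
import random

random.seed(0)
trials = 10_000
# Simulate 10,000 forecasts, each made at 70% confidence.
hits = sum(random.random() < 0.70 for _ in range(trials))
print(hits / trials)  # close to 0.70: being wrong ~30% of the time is expected
```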
Confidence Levels
Not all forecasts carry the same evidential weight. We tag each with a confidence level that signals how much data supports the current probability:
High confidence. Strong signal density. Multiple independent data sources corroborate the probability. The evidence base is broad and the analytical framework is well-tested for this type of prediction.
Medium confidence. Reasonable signal density, but with meaningful uncertainty about key variables. The probability reflects our best assessment but could shift significantly with one or two major developments.
Low confidence. Limited signal. This forecast is further out in time, involves more unknown variables, or covers a domain where historical patterns are less reliable. The probability is our best guess, not a confident assessment.
The Analysis Engine
Every forecast is produced by The Refractor — our proprietary signal analysis system. Raw AI news enters as noise. The Refractor passes it through five analytical lenses: Temporal (how signals age), Convergence (cross-domain alignment), Contrarian (disconfirming evidence), Source (credibility weighting), and Pattern (historical signature matching).
The specific weights and interaction models between lenses are proprietary. Read more about The Refractor methodology →
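For intuition only, here is one way five such lenses could be combined. The lens functions, their inputs, and the equal weighting are invented placeholders; the real weights and interactions are not public.

```python
from typing import Callable

# A signal is a plain dict of features; each lens maps it to a score.
Lens = Callable[[dict], float]

def combine(signal: dict, lenses: dict[str, Lens]) -> float:
    """Average the lens scores into one directional score
    (equal weights, purely for illustration)."""
    return sum(lens(signal) for lens in lenses.values()) / len(lenses)

lenses: dict[str, Lens] = {
    "temporal":    lambda s: 1.0 - s["age_days"] / 365,  # newer signals score higher
    "convergence": lambda s: s["corroborating_domains"] / 5,
    "contrarian":  lambda s: -s["disconfirming"],        # disconfirming evidence subtracts
    "source":      lambda s: s["source_credibility"],
    "pattern":     lambda s: s["pattern_match"],
}

signal = {
    "age_days": 30,
    "corroborating_domains": 2,
    "disconfirming": 0.1,
    "source_credibility": 0.8,
    "pattern_match": 0.2,
}
score = combine(signal, lenses)  # positive: the signal supports the forecast
```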
How We Keep Score
We use the Brier score — the standard accuracy metric in probabilistic forecasting. It measures the mean squared error between our predicted probabilities and actual outcomes: 0 is a perfect score, always guessing 50% yields 0.25, and lower is better.
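In code, the Brier score for a set of resolved forecasts might be computed like this (the function and sample numbers are illustrative):

```python
def brier_score(predictions: list[float], outcomes: list[int]) -> float:
    """Mean squared error between probabilities and 0/1 outcomes.
    0.0 is perfect; always guessing 50% scores 0.25; lower is better."""
    return sum((p - o) ** 2 for p, o in zip(predictions, outcomes)) / len(predictions)

# Three resolved forecasts: we said 68%, 35%, 74%; the first and last happened.
score = brier_score([0.68, 0.35, 0.74], [1, 0, 1])
```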
The calibration strip on the homepage shows our current Brier score, how many forecasts have resolved, our hit rate, and the median forecast horizon. This data is always public. When we're wrong, it shows.
How Forecasts Resolve
When a forecast's horizon arrives, we check it against its pre-written resolution criteria. There are exactly three outcomes:
Resolved true. The criteria were met. The predicted event happened within the specified timeframe.
Resolved false. The criteria were not met. The predicted event did not happen.
Open. The horizon hasn't arrived yet. The probability continues to update as new signals emerge.
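The three outcomes above can be sketched as a small resolution check; the function and its return labels are hypothetical, not TexTak's internal representation.

```python
from datetime import date

def resolve(horizon: date, criteria_met: bool, today: date) -> str:
    """Map a forecast to one of the three outcomes described above."""
    if today < horizon:
        return "open"  # probability keeps updating until the horizon
    return "resolved true" if criteria_met else "resolved false"

status = resolve(date(2028, 12, 31), False, date(2025, 6, 1))  # "open"
```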
See the Forecasts
6 active predictions, all with public probabilities and resolution criteria.