6 open predictions with public probabilities, explicit resolution criteria, and tracked accuracy.
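Tracked accuracy for predictions like these is commonly computed with a Brier score: the mean squared error between each stated probability and the eventual 0/1 outcome. The sketch below assumes a minimal record shape (the `Prediction` class and its fields are hypothetical, not the tracker's actual schema); the Brier score itself is a standard calibration measure.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical record shape; the tracker's real schema is not shown in the text.
@dataclass
class Prediction:
    claim: str
    probability: float            # stated probability the claim resolves True
    resolved: Optional[bool] = None  # None while the prediction is still open

def brier_score(predictions: list) -> float:
    """Mean squared error between stated probabilities and outcomes.

    0.0 is perfect; 0.25 is what always guessing 50% would score.
    Open (unresolved) predictions are excluded from the score.
    """
    done = [p for p in predictions if p.resolved is not None]
    return sum((p.probability - float(p.resolved)) ** 2 for p in done) / len(done)

# Illustrative probabilities only, not the site's actual numbers.
preds = [
    Prediction(">50% of new content AI-generated", 0.6, resolved=True),
    Prediction("Open model within 2% of closed model", 0.4, resolved=False),
]
print(round(brier_score(preds), 2))  # 0.16
```

Scoring only resolved predictions is the usual convention; it keeps the accuracy number comparable as new open predictions are added.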
The volume of AI-generated text, images, and video is growing exponentially.
True if credible research finds >50% of newly published internet content to be AI-generated.
Generation costs approaching zero
Detection methods improving
Rapid adoption of coding and customer service agents suggests broad enterprise deployment is accelerating.
True if 3+ Fortune 100 companies publicly report autonomous agent deployment across multiple business functions.
Major cloud providers shipping agent frameworks
Hallucination rates still too high for regulated industries
The gap between open and closed models has been narrowing.
True if an open-weights model scores within 2% of the leading closed model on MMLU, HumanEval, and GPQA.
Meta investing heavily in open-source
Frontier labs have data advantages
Companies are quietly replacing roles with AI but avoiding public attribution.
True if a Fortune 500 company announces 1,000+ layoffs with AI automation as the stated primary reason.
Back-office functions seeing headcount reduction
Companies avoid PR risk of attribution
AI radiology tools are closest to full autonomy, but the FDA's regulatory framework still assumes human-in-the-loop oversight.
True if the FDA grants approval for an AI system to make diagnostic decisions without mandatory physician review.
Multiple AI radiology systems outperforming human readers
Liability frameworks don't exist yet
As AI systems gain more autonomy, the probability of a high-profile failure that forces coordinated regulatory action increases.
True if a specific AI system failure leads to binding regulation adopted by 3+ major economies within 12 months.
Frontier models in high-stakes domains with minimal oversight
AI failures have been embarrassing, not catastrophic