64% of New Internet Content Is AI-Generated. Our Forecast Was Right to Hold at 68%.
TexTak holds [ai-generated-media] at 68%, down from an earlier 71%, reflecting our honest uncertainty about whether volume metrics translate to durable dominance given rising consumer skepticism. Today's news drops a piece of direct evidence we weren't expecting this fast: a joint MIT CSAIL and Oxford Internet Institute study estimating that AI-generated content already constitutes 64% of all newly published internet material in 2026, with AI-written text outpacing human-written text 17:1. The forecast asks whether AI-generated content will exceed 50% of new internet media. By one rigorous academic measure, it already has. That matters, though not in the simple way the headline suggests.
Let's be precise about what the MIT/Oxford figure proves and what it doesn't. The study measures share of newly published material by volume: articles, social media posts, raw output. This is direct evidence on one dimension of our forecast, raw content volume. The 64% figure would, on its face, resolve [ai-generated-media] YES if we take publication volume as the operative measure. That's a meaningful result, and we weight it heavily. It moves the question from 'conditions exist for AI content dominance' to 'a credible academic institution has measured AI content dominance in the present tense.' That's the difference between proximate and direct evidence, and today's news is the latter.
But our forecast sits at 68%, not 85%, for a reason the MIT/Oxford number doesn't dissolve: our original thesis was about durable content dominance, not a snapshot. The AGAINST column is not wrong just because the volume threshold has been crossed. Consumer preference for AI-generated content has dropped from 60% to 26% over three years. Detection methods are reaching 88% consumer accuracy. Platforms are implementing content policies with genuine enforcement teeth. The real question our forecast tracks is whether AI content exceeds 50% in a stable, sustained way, or whether we're watching a wave that platforms and consumers will partially roll back through policy, filtering, and preference. The volume is there. The durability question is still open.
Here's the counterargument we take seriously: the platforms most flooded with AI content — content farms, SEO spam, social media — are precisely the ones where volume metrics are most susceptible to inflation. If AI-generated SEO spam constitutes 80% of new articles but 90% of it gets deindexed within weeks, 'published' may not mean 'present.' The MIT/Oxford methodology matters enormously here, and we haven't seen the full paper. If 'newly published' counts content that subsequently gets filtered, the 64% figure is a production metric, not a persistence metric. That distinction could significantly affect what the number means for our forecast.
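To make the production-versus-persistence gap concrete, here is the arithmetic behind the hypothetical above (a sketch using the post's illustrative numbers, not measured data): if 80% of new articles are AI-generated SEO spam and 90% of that spam is deindexed within weeks, the persisting AI share is far below the published share.

```python
# Worked version of the hypothetical above. All figures are the post's
# illustration, not measurements from the MIT/Oxford study.
ai_published = 0.80                         # share of new articles that are AI spam
ai_surviving = ai_published * (1 - 0.90)    # 90% deindexed, so 0.08 of volume remains
human_surviving = 1 - ai_published          # assume human content persists: 0.20

# Share of *surviving* content that is AI-generated
persistent_ai_share = ai_surviving / (ai_surviving + human_surviving)
print(round(persistent_ai_share, 3))  # 0.286
```

Under these assumptions, 'published' reads as 80% AI while 'present' is roughly 29%, which is exactly why the methodology question matters for resolution.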
What would move us? If Q2 platform enforcement data shows AI content share holding above 50% of indexed, discoverable material, not just published material, we'd push this above 75% with confidence. If major platforms successfully reduce AI content's discoverability share below 40% through policy enforcement by end of 2026, we'd drop below 55%. The 68% reflects a balance: the MIT/Oxford direct volume evidence is strong and pushes us up, offset by the consumer-preference collapse and a platform-policy trajectory that remains genuinely uncertain. We're watching Google Search index quality reports and platform-specific enforcement announcements as the next signal.
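The update rules above can be written down as a small decision function (a hypothetical sketch; the function and parameter names are ours, and only the thresholds and target probabilities come from the post):

```python
def updated_forecast(current, indexed_ai_share=None, enforcement_reduced_share=None):
    """Apply the two named triggers to the current forecast probability.

    indexed_ai_share: AI share of indexed, discoverable material (Q2 data).
    enforcement_reduced_share: AI discoverability share after platform
    enforcement, measured at end of 2026.
    Values are fractions in [0, 1]; returns the new forecast probability.
    """
    if indexed_ai_share is not None and indexed_ai_share > 0.50:
        return max(current, 0.76)   # "push this above 75%"
    if enforcement_reduced_share is not None and enforcement_reduced_share < 0.40:
        return min(current, 0.54)   # "drop below 55%"
    return current                  # no trigger fired: hold

print(updated_forecast(0.68, indexed_ai_share=0.55))          # 0.76
print(updated_forecast(0.68, enforcement_reduced_share=0.35)) # 0.54
print(updated_forecast(0.68))                                 # 0.68
```

This is just the post's stated triggers made explicit; the real update would of course weigh the evidence rather than jump to fixed points.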