TexTak
65% by Dec 2027
moderate

Employee at a major AI lab publicly claims their system shows signs of sentience

The Blake Lemoine incident at Google in 2022 established this pattern. As models become more capable, the probability of another high-profile insider claim increases, regardless of the scientific merit of such a claim.

RESOLUTION CRITERIA

Resolves True if a current or recently departed employee of OpenAI, Google DeepMind, Anthropic, Meta AI, or xAI makes a public statement claiming a system shows signs of sentience or consciousness. The claim must generate coverage in 3+ major outlets.

▲ FOR

Lemoine precedent (Google, 2022) established the pattern

Model capabilities have advanced dramatically since 2022

Whistleblower culture at AI labs is active

Media incentives amplify these claims

▼ AGAINST

AI labs have tightened internal communications policies since the Lemoine incident

Employees risk career damage from such claims

NDAs restrict public statements

RECENT SIGNALS (2)
Elon Musk Testifies in OpenAI Trial, Claims $38M Donation Created $800B Company He Lost Control Of
Bloomberg / Washington Post
Pentagon Strikes AI Deals with 7 Major Tech Companies, Excludes Anthropic Over Safety Guardrails Dispute
CNN