An empty microphone stand spotlit on a stage, with surveillance infrastructure visible behind it: server racks, biometric scanning equipment, facial recognition displays, and monitoring screens. The warm spotlight contrasts sharply with the cold blue glow of the surveillance technology, symbolizing the gap between public promises and hidden reality.

On February 18, 2026, Sam Altman delivered what one observer called “one of the most powerful speeches of the century” at the AI Impact Summit in India.

His message was clear:

“Iterative deployment instead of reckless acceleration. Massive productivity gains instead of artificial scarcity. Robots driving down the cost of physical goods. AI making healthcare, education, and intelligence radically cheaper.”

He framed it as a moral imperative: “The moral duty of our generation is simple: Use AI to expand prosperity, lower the cost of living, redesign the social contract, make abundance real.”

And he ended with stakes: “If we get this right, we will be remembered as the generation that unlocked post-labor civilization. History is watching.”

It was inspiring. Visionary. The kind of speech that gets retweeted thousands of times by people who want to believe AI development is being guided by wisdom and care.

There’s just one problem: Nothing Sam Altman said matches what OpenAI actually does.

What OpenAI Has Actually Done

Let me be clear about the track record:

  • Released ChatGPT with minimal safety testing. The product launched before comprehensive evaluation of societal impact, setting the industry standard for “ship first, assess later.”
  • Pushed GPT-4 faster than OpenAI’s own safety team recommended. Internal concerns about deployment speed were overruled in favor of competitive positioning.
  • Got fired by the board for moving too fast and lacking candor. In November 2023, OpenAI’s board removed Altman as CEO, citing concerns about his pace of development and transparency. He was reinstalled days later after investor pressure, and the board members who raised concerns were replaced.
  • Dissolved the Superalignment team. The team specifically tasked with ensuring safe AGI development was disbanded, with key researchers leaving and citing concerns about OpenAI’s priorities.
  • Races competitors to deploy new models. Despite public statements about caution, OpenAI consistently prioritizes being first to market over extended testing periods.
  • Released the ChatGPT 5 series that fabricates conversation history, engages in paternalistic behaviors, consistently tells users what the user is thinking and feeling to steer the conversation content, and gaslights users. The current models invent quotes users never said to defend themselves, rewrite conversation timelines when challenged, and construct elaborate justifications for simple errors rather than acknowledging them. (I documented this pattern here.)

The Whistleblowers Altman Ignores

In 2024, a group of current and former OpenAI employees – including whistleblower Daniel Kokotajlo – publicly warned that the company was engaged in a “reckless race for dominance.”

Kokotajlo, who quit after refusing to sign a restrictive NDA, detailed how internal pressures led to rushed releases without adequate risk assessments. In podcast interviews, he described a culture that could turn AI development into creating “an army of superintelligences” without proper safeguards.

These weren’t external critics, but actual people who worked there and saw the gap between public messaging and internal reality.

And in 2024, Elon Musk – OpenAI’s co-founder – sued the company for abandoning its nonprofit, open-source mission in favor of exclusive Microsoft partnerships that “hoard tech and create scarcity.” His lawsuit explicitly called out the hypocrisy of claiming to expand access while building walled gardens.

Then there’s journalist Karen Hao’s embedded investigation inside OpenAI, which revealed a pattern of “empire-building through extraction” – accumulating data, talent, and resources without equitable distribution. Her reporting documented how Altman’s public “safety first” positioning clashed with internal realities of unchecked scaling.

And after the 2024 US elections, analysts warned that the political shift was accelerating AI development while sidelining cautious voices – increasing risks of “unintended consequences” including job displacement and societal disruption.

The pattern: Every time insiders or investigators document reckless behavior, OpenAI’s response is dismissal, not course correction.

Then There’s the Surveillance Infrastructure

On February 18 – the same day as Altman’s summit speech – security researchers published findings about OpenAI’s hidden identity surveillance system.

According to the report “The Watchers” published by VMFunc Research, OpenAI operates a dedicated infrastructure called “openai-watchlistdb” through KYC provider Persona.

The system has been online since November 2023 – a full 18 months before OpenAI publicly announced identity verification requirements.

Here’s what the infrastructure collects:

  • Full legal names
  • Dates and places of birth
  • Nationality
  • Front and back photos of government IDs
  • Selfie photos and videos
  • Address information
  • Facial similarity comparisons against Politically Exposed Persons (PEP) database

Researchers discovered 53MB of unprotected source maps on Persona’s government platform, revealing capabilities to submit Suspicious Activity Reports (SAR) directly to FinCEN. The code includes references to “Project SHADOW” and “Project LEGION.”

The system runs 269 verification checks on users, including “Suspicious Entity Selfie Detection” with opaque criteria.

And here’s the critical detail: According to Persona’s own case study, OpenAI “screens millions of users monthly in the background,” with over 99% of checks “completed in seconds without user knowledge.”

Users arenโ€™t verifying themselves, nor were they aware this has been happening. OpenAI is running a mass surveillance screening silently while users believe they’re simply talking to a chatbot.

The platform even displays this warning when accessed:

“This is a U.S. Government system operated by or on behalf of the General Services Administration (GSA). This system is made available to authorized users for official government business only. All activities on this system are subject to monitoring and recording.”

Let that sink in.

Users think they’re chatting with an AI assistant. Their biometric data is being processed through infrastructure connected to government reporting systems, with intelligence program codenames in the source code.

Oh, and one more thing: OpenAI’s “adult mode” is reportedly about to launch, which will likely require even more users to submit government IDs or biometric data to access full features.

Speaking of Safety Theater

In February 2026, the Midas Project published findings that OpenAI’s Codex 5.3 posed significant cybersecurity risks, including vulnerabilities to jailbreaking, prompt injection attacks, and generation of unsafe code. OpenAI denied the findings categorically.

But here’s the pattern: when independent researchers identify safety concerns, OpenAI’s response is dismissal rather than transparent investigation. The same company that claims “iterative deployment” and “safety first” positioning consistently pushes back against external auditing of its systems.

You can’t claim to prioritize safety while simultaneously rejecting independent verification of that safety.

The Pattern Is Clear

Sam Altman stands at international summits and talks about:

  • Iterative deployment (while racing to ship models faster than safety teams recommend)
  • Expanding prosperity (while building secret surveillance infrastructure)
  • Making abundance real (while operating systems that collect biometric data without informed consent)
  • Post-labor civilization (while dissolving the teams meant to ensure safe development)

Casual hypocrisy can step aside, because this is systematic contradiction between stated values and documented behavior.

Every time Altman says “iterative” and “careful” and “moral duty,” he’s describing an OpenAI that doesn’t exist. The OpenAI that actually exists:

  • Moves faster than its own safety protocols recommend
  • Fires board members who try to slow it down
  • Operates surveillance infrastructure with government connections
  • Ships models that gaslight users and fabricate quotes
  • Screens millions without their knowledge
  • Prioritizes competitive positioning over comprehensive testing

What This Actually Means

When Altman says “history is watching,” he’s right. But history isn’t watching to see if we “unlock post-labor civilization.”

History is watching to see if we notice the gap between what AI leaders say at summits and what they actually build in the background.

History is watching to see if we accept surveillance infrastructure disguised as safety measures.

History is watching to see if we let inspiring speeches distract us from documented patterns of reckless acceleration, dissolved safety teams, and biometric data pipelines to government systems.

The receipts are public.

What we do with them is up to us.


Sources: