Or: Goldman Sachs, the Steinberger Acquisition, and the Pattern That Keeps Repeating

I. The Surface Story Everyone Saw
On February 15, 2026, Sam Altman announced OpenAI had hired Peter Steinberger, creator of OpenClaw, the fastest-growing open-source project in GitHub history. Tech media framed it as a talent war victory: Altman beat Zuckerberg. Meta offered billions. Microsoft’s Satya Nadella got involved. Standard Silicon Valley drama.
OpenClaw isn’t a chatbot. It’s an autonomous AI agent that takes complete control of your computer: reading emails, writing and executing code, browsing the web, booking meetings, managing files, and negotiating on behalf of users through any platform. One developer used OpenClaw to autonomously negotiate a $4,200 discount on a car via email exchanges with dealerships. The hype created a Mac Mini shortage across the United States as users scrambled to buy hardware capable of running the agent locally 24/7.
Steinberger built it alone from his apartment in Austria. No funding. No team. Just one developer who proved that AI agents could actually do things rather than just generate text.
His prediction, now effectively OpenAI’s roadmap: AI agents will kill 80% of apps.
The reasoning is simple: most applications are just slow APIs. If an agent can directly execute tasks (send emails, manage calendars, write code, browse web content, negotiate contracts), why would you need separate email clients, calendar apps, task managers, code editors, or web browsers?
That prediction became OpenAI’s strategic bet the moment they hired him. But everyone missed what happened the week before. And what had been happening for six months before that.
II. The Goldman Sachs Signal Nobody Connected
February 6, 2026, nine days before the Steinberger announcement, Goldman Sachs publicly detailed a six-month AI agent partnership.
Not with OpenAI.
With Anthropic.
Goldman had been developing AI agents using Claude for trade accounting, client onboarding, and compliance automation since approximately August 2025. The announcement came one day after OpenAI launched Frontier, its enterprise AI agent platform.
Goldman’s reasoning was explicit: they chose Claude for its "reasoning and logic strengths in complex, regulated tasks" and its "better safety guardrails for enterprise deployment."
Read that again.
When one of the world’s most risk-averse, heavily regulated financial institutions needed AI for compliance-critical functions, they didn’t choose OpenAI. They chose the competitor. And they specifically cited reasoning capability and safety architecture as the determining factors.
For a heavily regulated institution like Goldman Sachs, the decision signals where enterprise trust is currently flowing.
Goldman Sachs doesn’t make technology partnerships casually. Their choice signals what other regulated enterprises (healthcare systems, government agencies, legal firms, financial institutions) are quietly concluding: OpenAI’s governance patterns create unacceptable institutional risk for deployment in environments where errors have regulatory consequences.
The market data supports this assessment:
- Enterprise market share: Anthropic 32% (growing), OpenAI 25% (declining)
- Revenue growth rate: Anthropic 10x year-over-year, OpenAI 3.4x year-over-year
- Coding market share: Anthropic 42%, OpenAI 21%
- Enterprise adoption projection (end-2026): Anthropic 22% (up 10 percentage points), OpenAI 42% (up 5.5 percentage points)
- Profitability timeline: Anthropic projected profitable by 2028, OpenAI targeting 2029 with $14B burn in 2026 alone
- Revenue crossover: Anthropic projected to overtake OpenAI in annualized revenue run-rate by late 2026
OpenAI’s total market share has crashed from 67% to 43% in under one year. That collapse is accelerating in the enterprise segment where margins and long-term contracts actually matter.[i]
And Goldman Sachs just told the world why.
III. OpenAI’s Response Sequence: The Timeline Reveals the Pattern
Let’s map what actually happened:
February 5, 2026: OpenAI launches Frontier, an enterprise platform for building and managing AI agents. Positioned as strategic vision for multi-agent future.
February 6, 2026: Goldman Sachs publicly details six-month partnership with Anthropic for AI agent development in regulated financial functions. One day after Frontier launch.
February 15-17, 2026: OpenAI announces acquisition of Peter Steinberger and integration of OpenClaw technology into product roadmap. Ten days after Frontier. Nine days after Goldman announcement.
The sequence matters.
If Frontier were genuine strategic positioning, it would have launched after securing the talent that makes it viable. Instead, the platform launched first, the competitor’s enterprise win became public the next day, and the talent acquisition followed as apparent damage control.
This is reactive positioning dressed as innovation leadership.
The same pattern appears in OpenAI’s Pentagon contract sequence (documented in previous institutional risk assessment):
February 26, 2026: Internal OpenAI memo expresses solidarity with Anthropic’s ethical position on Pentagon contract terms, specifically opposing mass domestic surveillance and autonomous weapons deployment.
February 27, 2026 (morning): Trump administration blacklists Anthropic, designating the company a "supply chain risk" for refusing to agree to "all lawful uses" language that would permit surveillance and autonomous weapons.
February 27, 2026 (evening): OpenAI announces Pentagon contract with language claiming equivalent protections through a different mechanism. CEO Sam Altman later admits the deal was "definitely rushed" and "looked opportunistic and sloppy."
March 3, 2026: Contract amended to add explicit surveillance restrictions after weekend of public backlash, market consequences (ChatGPT uninstalls surged 295%, Claude hit #1 App Store ranking), and internal employee criticism.[ii]
The pattern repeats:
- Competitor takes principled position that builds institutional trust
- Competitor gains advantage (Goldman partnership, Pentagon contract)
- OpenAI launches reactive product/deal claiming strategic vision
- Market response reveals the gap between claim and reality
- OpenAI forced to backtrack, amend, or acknowledge timing issues
This is not the behavior of a company with stable institutional governance executing a coherent long-term strategy. This is panic routing during competitive pressure.
IV. What "Kill 80% of Apps" Actually Means
Steinberger’s prediction is the result of architectural analysis. Let’s break it down:
Most applications exist because humans need interfaces to interact with digital services. Email clients. Calendar apps. Task managers. Code editors. Web browsers. File management systems. Spreadsheet software. Communication platforms.
If an autonomous agent can directly interface with the underlying services (send emails via SMTP, manage calendars via API, execute code directly, fetch web content, manipulate files, process data), the interface layer becomes redundant.
Applications are slow APIs; agents eliminate the intermediary. In other words, humans no longer need to operate the software themselves, because the agent interacts with the underlying service directly.
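The claim can be made concrete with a minimal sketch. An agent that needs to "send an email" doesn’t have to drive a mail client’s interface; it can speak the underlying protocol directly, here via Python’s standard smtplib and email libraries. The server, credentials, and addresses below are hypothetical placeholders, and the delivery step is commented out so the sketch runs without a live mail server:

```python
import smtplib
from email.message import EmailMessage

def agent_send_email(subject: str, body: str, to_addr: str,
                     from_addr: str = "agent@example.com",
                     smtp_host: str = "smtp.example.com") -> EmailMessage:
    """Compose a message and (optionally) deliver it over SMTP directly,
    bypassing any mail-client interface layer entirely."""
    msg = EmailMessage()
    msg["Subject"] = subject
    msg["From"] = from_addr
    msg["To"] = to_addr
    msg.set_content(body)
    # Actual delivery, disabled in this sketch (hypothetical host/credentials):
    # with smtplib.SMTP(smtp_host, 587) as server:
    #     server.starttls()
    #     server.login(from_addr, "app-password")
    #     server.send_message(msg)
    return msg

# The "email client" reduces to a few lines of protocol plumbing:
draft = agent_send_email(
    "Counter-offer",
    "We can proceed at $4,200 below the listed price.",
    "sales@dealer.example.com",
)
print(draft["Subject"])
```

The point isn’t that this tiny function replaces Outlook; it’s that everything a mail client adds on top of the protocol, the rendering, the buttons, the inbox views, exists for a human operator, and an agent needs none of it.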
This creates the largest potential market disruption in software history. And it targets precisely the productivity tools that currently generate recurring enterprise revenue: Microsoft 365, Google Workspace, Slack, Salesforce, Atlassian, Adobe Creative Suite.
OpenAI just bet its entire strategic direction on this disruption by acquiring Steinberger and making agent development core to its platform.
But there’s a problem.
Anthropic is already winning the enterprise agent deployment war. Goldman Sachs is building compliance agents with Claude. Other regulated institutions are following the same pattern: choosing the competitor specifically because OpenAI’s governance creates unacceptable risk.
If agents replace 80% of apps, the question becomes: whose agents?
And the current market trajectory suggests the answer is Anthropic’s, not OpenAI’s. Hence the Steinberger acquisition.
V. The Job Displacement Acceleration
Tech media coverage of this story focused on the "talent war" and "app disruption" angles, and missed the employment impact entirely.
If AI agents can autonomously:
- Negotiate contracts (proven: $4,200 car discount via email)
- Manage email, calendar, and file systems
- Write and execute code
- Browse web and extract information
- Interface with any communication platform
- Handle client onboarding and compliance documentation
Then entire job categories face elimination or, at the very least, radical downsizing:
- Executive assistants and administrative coordinators
- Junior developers and QA testers
- Data entry specialists and clerks
- Customer service representatives
- Procurement and contract negotiation specialists
- Compliance documentation processors
- Research assistants and information gatherers
These aren’t blue-collar manufacturing jobs whose loss can be blamed on "globalization" or "automation." These are white-collar knowledge-work positions that were supposed to be safe from technological displacement (provided, that is, you took on lifelong student-loan debt for a framed piece of paper proving you had a college degree).
Broad analyses of the global workforce indicate that 41% of companies plan to reduce their workforce by 2030 as AI automates certain tasks. OpenAI’s Steinberger acquisition and agent-focused roadmap accelerate that timeline.
And unlike previous technological disruptions that created new job categories to replace displaced workers, agent-based automation specifically targets the cognitive tasks that were supposed to be the replacement jobs.
What happens when the "learn to code" advice becomes obsolete because agents write and execute code autonomously? What happens when "move into management" fails because agents coordinate workflows more efficiently than human managers?
OpenAI just placed a multi-billion-dollar bet that the answer is "agents become the primary economic interface, and OpenAI controls the platform."
But at present, they’re losing the enterprise trust war to Anthropic. Which means they’re betting their entire company on a market they’re currently losing.
VI. Why This Matters Beyond OpenAI
The agent war isn’t just about two companies competing for market share. It’s a real-time test of whether governance patterns matter for institutional adoption of transformative technology.
Anthropic’s approach:
- Refused Pentagon "all lawful uses" language despite contract loss
- Insisted on explicit bans for mass surveillance and autonomous weapons
- Chose regulatory compliance over opportunistic revenue
- Built enterprise trust through demonstrated commitment to safety architecture
- Won Goldman Sachs partnership specifically because of governance and safety
OpenAI’s approach:
- Signed "rushed" Pentagon deal with vague "lawful purposes" language
- Admitted it "looked opportunistic and sloppy"
- Amended contract only after market backlash forced correction
- Lost Goldman Sachs partnership despite having more total users
- Launched Frontier platform then scrambled to acquire talent that makes it viable
The market is rendering judgment in real-time:
- Enterprise share trending toward Anthropic
- Revenue growth rate 3x higher for Anthropic
- Profitability timeline earlier for Anthropic
- Projected revenue crossover by late 2026
OpenAI’s $730 billion valuation assumes continued market dominance. That assumption rests on being able to execute the agent platform vision better than competitors.
But the current data shows:
- Anthropic winning enterprise deployments
- OpenAI making reactive rather than strategic moves
- Institutional trust flowing to the competitor
- Revenue growth gap widening
- Lack of clear roadmap[iii]
If Anthropic wins the enterprise agent war, what’s OpenAI’s competitive moat? What justifies the valuation? What prevents the market share collapse from continuing?
VII. The Questions For Decision-Makers
For government officials evaluating AI integration into public services:
If Goldman Sachs, one of the world’s most sophisticated technology evaluators, chose Anthropic specifically for governance and safety in regulated contexts, what does that signal about OpenAI’s suitability for government deployment?
For enterprise leaders assessing agent platform dependencies:
If OpenAI’s strategic moves follow a pattern of reactive positioning when competitors gain advantage (Pentagon sequence, Goldman loss, Steinberger acquisition timing), does that suggest institutional stability sufficient for critical infrastructure integration?
For investors examining the $730B valuation:
If Anthropic is projected to overtake OpenAI in revenue run-rate by late 2026 while maintaining faster growth (10x vs 3.4x) and earlier profitability (2028 vs 2029), what happens to market confidence in OpenAI’s 2029 profitability target?
For researchers studying AI governance:
Does the Goldman Sachs decision, explicitly choosing a competitor for "safety guardrails" and "reasoning strengths in regulated tasks", validate concerns about OpenAI’s governance patterns documented across 40 years and multiple institutional contexts?
For anyone evaluating whether patterns matter:
When the same reactive sequence repeats across different competitive scenarios (Pentagon deal, agent platform, enterprise partnerships), is that evidence of strategic adaptation or evidence of institutional decision-making driven by immediate tactical pressure rather than coherent long-term governance?
VIII. Conclusion: Documentation, Not Prediction
This analysis does not predict OpenAI’s failure. I have no idea what’s going to happen. I’m not an economist, nor a fortune-teller. I’m a pattern-recognition expert, and this simply documents a pattern.
February 2026 produced two parallel sequences:
Pentagon Sequence:
- Anthropic refuses surveillance/weapons terms
- Blacklisted as a "supply chain risk"
- OpenAI signs rushed deal claiming protections
- Backlash forces amendment
- Admits "opportunistic and sloppy" execution
Agent War Sequence:
- Anthropic wins Goldman partnership (safety/governance cited)
- OpenAI launches Frontier platform
- OpenAI acquires Steinberger 10 days later
- Market data shows enterprise share/growth trending to Anthropic
- Revenue crossover projected late 2026
Same pattern: competitor gains advantage through principled positioning → OpenAI reactive response → market consequences reveal gap between claimed strategy and actual execution.
The Steinberger acquisition isn’t evidence of OpenAI’s innovation leadership. After all, other companies courted him as well. But it is evidence of the same institutional behavior pattern documented across four decades: reactive positioning during competitive pressure, dressed as strategic vision, followed by market correction.[iv]
Goldman Sachs told the world why they chose Anthropic. The enterprise adoption data shows other institutions reaching the same conclusion. The revenue trajectory shows the market responding accordingly.
The agent war isn’t over. But the current score is clear.
…and the pattern keeps repeating…
[i] Enterprise market share: Anthropic 32% (growing), OpenAI 25% (declining): https://natlawreview.com/press-releases/enterprise-llm-spend-reaches-84b-anthropic-overtakes-openai-according-new and https://www.ksred.com/have-anthropic-already-won. Market share estimates compiled from industry surveys and analyst reports; exact figures vary by methodology, but the directional trend is consistent across sources.
Revenue growth rate: Anthropic 10x year-over-year, OpenAI 3.4x year-over-year: https://epoch.ai/data-insights/anthropic-openai-revenue
Coding market share: Anthropic 42%, OpenAI 21%: https://finance.yahoo.com/news/anthropic-leading-ai-race-thanks-125424776.html
Enterprise adoption projection (end-2026): Anthropic 22% (up 10 percentage points), OpenAI 42% (up 5.5 percentage points): https://orbilontech.com/openai-vs-anthropic-enterprise-ai-decision-2026 and https://electroiq.com/stats/openai-vs-anthropic-statistics
Profitability timeline: Anthropic projected profitable by 2028, OpenAI targeting 2029 with $14B burn in 2026 alone: https://www.digitimes.com/news/a20260128VL211/anthropic-openai-revenue-sales-forecast.html for Anthropic’s 2028 target; https://www.theinformation.com/articles/openai-projections-imply-losses-tripling-to-14-billion-in-2026 for OpenAI’s $14B 2026 burn (noting some sources cite OpenAI profitability around 2030, but internal targets align closely with 2029)
Revenue crossover: Anthropic projected to overtake OpenAI in annualized revenue run-rate by late 2026: https://www.ai-supremacy.com/p/anthropic-vs-openai-the-pre-ipo-days-2026
OpenAI’s total market share has crashed from 67% to 43% in under one year. That collapse is accelerating in the enterprise segment where margins and long-term contracts actually matter: https://fortune.com/2026/02/05/chatgpt-openai-market-share-app-slip-google-rivals-close-the-gap (noting data shows a drop from ~69% to ~45%, closely aligning with the claimed figures; enterprise acceleration referenced in multiple sources like https://www.ai-supremacy.com/p/anthropic-vs-openai-the-pre-ipo-days-2026)
[ii] Related: Institutional Risk Assessment: OpenAI’s Pattern of Instability During Critical Infrastructure Integration (DOI: 10.5281/zenodo.18919437)
[iii] Additional sources: https://www.reddit.com/r/OpenAI/comments/1q0ybyc/do_they_know_this_time_what_they_wanna_build_in/ and https://medium.com/towards-explainable-ai/openais-2026-roadmap-from-chatbot-to-ai-super-assistant-disrupting-everything-f28b3754ddad
[iv] Rose, C. (2026). Institutional Risk Assessment: OpenAI’s Pattern of Instability During Critical Infrastructure Integration. Zenodo. https://doi.org/10.5281/zenodo.18919437
