Oil Market Cap – Global Oil & Energy News, Data & Analysis

The Glut of ‘Why I Quit’ Letters Is Out of Control

By omc_admin · February 16, 2026 · 9 Mins Read


Corporate resignations rarely make news, except at the highest levels. But in the last two years, a spate of X posts, Substack open letters, and public statements from prominent artificial intelligence researchers has created a new literary form — the AI resignation letter — with each addition becoming an event to be mined for meaning. Together, the canon of these letters — some of them apparently bound by non-disclosure agreements and other loyalties, legally compelled or not — tells us a lot about how some of the top people in AI see themselves and the trajectory of their industry. Overall, the image is bleak.

This past week brought several additions to the annals of “Why I quit this incredibly valuable company working on bleeding-edge tech” letters, including from researchers at xAI and an op-ed in The New York Times from a departing OpenAI researcher. Perhaps the most unusual was by Mrinank Sharma, who was put in charge of Anthropic’s Safeguards Research Team a year ago, and who announced his departure from what is often considered the more safety-minded of the leading AI startups. He posted a 778-word letter on X that was at times romantic and brooding — he quoted the poets Rainer Maria Rilke and Mary Oliver. The letter, which opined on AI safety, his own experiences working on AI sycophancy and “AI-assisted bioterrorism,” and the “poly-crisis” consuming our society, included three footnotes and some ominous, if vague, warnings.

“We appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences,” Sharma wrote. “Throughout my time here, I’ve repeatedly seen how hard it is to truly let our values govern our actions.”

Sharma noted that his final project at Anthropic was “on understanding how AI assistants could make us less human or distort our humanity” — a nod, perhaps, to the scourge of AI psychosis and other novel harms emerging from people overvaluing their relationships with chatbots. He said that he didn’t know what he was going to do next, but expressed a desire to pursue “a poetry degree and devote myself to the practice of courageous speech.” The researcher ended by including the full text of “The Way It Is” by the poet William Stafford.

In the annals of AI resignations, Sharma’s missive might be less dramatic than the boardroom coup that ousted OpenAI CEO Sam Altman for five days in November 2023. It’s less troubling than some of the other end-of-days warnings published by AI safety researchers who quit their posts believing that their employers weren’t doing enough to mitigate the potential harms of artificial general intelligence, or AGI, a smarter-than-human intelligence that AI companies are racing to build. (Some AI experts question whether AGI is even achievable or what it might mean.)

But Sharma’s note captures the deep attachments that top AI researchers — who are extremely well-compensated and work together in small teams — feel to their work, their colleagues, and, often, their employers. It also exposes some of the tensions that we see cropping up again and again in these resignation announcements. At top AI labs, there’s an intense competition for resources between research/safety teams and people working on consumer-facing AI products. (Few, if any, public resignations seem to come from people on the product side.) There are pressures to ship without proper testing, established safeguards, or knowing what might happen when a system goes rogue. And there’s a deep sense of mission and purpose that can sometimes be upended by feelings of betrayal.

Many of the people who have publicly quit AI companies work in safety and “alignment,” the field tasked with making sure that AI capabilities align with human needs and welfare. Many of them seem very optimistic about AI, and even AGI, but they worry that financial pressures are eating away at safeguards. Few seem to be giving up on the field entirely — except perhaps Sharma, the aspiring poet. Either they jump ship for another seven-, eight-, or nine-figure job at a competing AI startup, or they become civic-minded AI analysts and researchers at one of a growing number of AI think tanks.


[Photo: Sam Altman. Caption: “Neither OpenAI nor any other frontier lab is ready, and the world is also not ready” for AGI, wrote Miles Brundage when he resigned from OpenAI’s AGI readiness team in 2024. Credit: Shelby Tauber/Reuters]

All of them seem to be worried that either epic gains or epic disasters lie ahead. Announcing his departure from Anthropic to become OpenAI’s Head of Preparedness earlier this month, Dylan Scandinaro wrote on LinkedIn, “AI is advancing rapidly. The potential benefits are great — and so are the risks of extreme and even irrecoverable harm.” Daniel Kokotajlo, who resigned from OpenAI, said that OpenAI’s systems “could be the best thing that has ever happened to humanity, but it could also be the worst if we don’t proceed with care.”

Recently, xAI, where co-founder Elon Musk is notorious for tinkering with the proverbial dials of the Grok chatbot, has seen a half-dozen members of its founding team leave. But the locus of the AI resignation letter, as a kind of industry artifact, is the red-hot startup OpenAI, where major figures, including top executives and safety-minded researchers, have been leaving for the last two years. Some resigned; some were fired; some were described in the press as “forced out” over internal company disputes. Seven left in a short period in the first half of 2024.

With revenue paling compared to its massive and growing infrastructure costs, OpenAI recently announced that it would begin incorporating ads into ChatGPT. That caused researcher Zoë Hitzig to quit. This week, she published a resignation letter in the Times, warning about the potential implications of ads becoming part of the substrate of chatbot conversations. “ChatGPT users have generated an archive of human candor that has no precedent, in part because people believed they were talking to something that had no ulterior agenda,” she wrote. But, she warned, OpenAI seemed prepared to leverage that “archive of human candor” — much as Facebook had done — to target ads and undermine user autonomy. In the service of maximizing engagement, consumers might be manipulated — the classic sin of the modern internet.

If you think you are building a world-changing invention, you need to be able to trust your leadership. That’s been a problem at OpenAI. On November 17, 2023, Altman was dramatically fired by the company’s board because, it claimed, Altman was “not consistently candid in his communications with the board.” Less than a week later, he performed his own boardroom coup and was reinstated, before consolidating his power. The exodus proceeded from there.

On May 14, 2024, OpenAI co-founder Ilya Sutskever announced his resignation. Sutskever was replaced as head of OpenAI’s superalignment team by John Schulman, another company co-founder. A few months later, Schulman left OpenAI for Anthropic. Six months later, he announced his move to Thinking Machines Lab, an AI startup founded by former OpenAI CTO Mira Murati, who had replaced Altman as OpenAI’s interim CEO during his brief firing.

The day after Sutskever left OpenAI, Jan Leike, who also helped head OpenAI’s alignment work, announced on X that he had resigned. “OpenAI is shouldering an enormous responsibility on behalf of all of humanity,” Leike wrote, but the company’s “safety culture and processes have taken a backseat to shiny products.” He thought that “OpenAI must become a safety-first AGI company.” Less than two weeks later, Leike was hired by Anthropic. OpenAI and Anthropic did not respond to requests for comment.

At OpenAI, departing researchers have said that the experts concerned with alignment and safety have often been sidelined, pushed out, or scattered among other teams, leaving researchers with the sense that AI companies are sprinting to build an invention they won’t be able to control. “In short, neither OpenAI nor any other frontier lab is ready, and the world is also not ready” for AGI, wrote Miles Brundage when he resigned from OpenAI’s AGI readiness team in 2024. Yet he added that “working at OpenAI is one of the most impactful things that most people could hope to do” and did not directly criticize the company. Brundage now runs AVERI, an AI research institute.

Across the AI industry, the story is much the same. In public pronouncements, top researchers gently chastise or occasionally denounce their employers for pursuing a potentially apocalyptic invention while also emphasizing the necessity of doing that research. Sometimes they offer a “cryptic warning” that leaves AI watchers scratching their heads. A few do seem genuinely alarmed at what’s happening. When OpenAI safety researcher Steven Adler left the company in January 2025, he wrote that he was “pretty terrified by the pace of AI development” and wondered if it would wipe out humanity.

Yet in the many AI resignation letters, there’s little discussion of how AI is being used right now. Data center construction, resource consumption, mass surveillance, ICE deportations, weapons development, automation, labor disruption, the proliferation of slop, a crisis in education — these are the areas where many people see AI affecting their lives, sometimes for the worse, and the industry’s pious resignees don’t have much to say about it all. Their warnings about some disaster just beyond the horizon become fodder for the tech press — and de facto cover letters for their next industry job — while failing to reach the broader public.

“Tragedies happen; people get hurt or die; and you suffer and get old,” wrote William Stafford in the poem that Mrinank Sharma shared. It’s a terrible thing, especially the tones of passivity and inevitability — resignation, you might call it. It can feel as if no single act of protest is enough, or, as Stafford writes in the next line: “Nothing you do can stop time’s unfolding.”

Jacob Silverman is a contributing writer for Business Insider. He is the author, most recently, of “Gilded Rage: Elon Musk and the Radicalization of Silicon Valley.”

Business Insider’s Discourse stories provide perspectives on the day’s most pressing issues, informed by analysis, reporting, and expertise.

