Why Claude blackmailed engineers
Plus, OpenAI’s $4B enterprise bet

Welcome back! Here’s a wild one. Cowboy Space Corporation, founded by Robinhood co-founder Baiju Bhatt, just raised $275 million at a $2 billion valuation to build rockets for a very specific purpose: launching AI data centers into orbit. The company plans to build its satellites directly into the rocket's second stage, each generating 1 megawatt of power for nearly 800 onboard GPUs.
First launch target: before the end of 2028. Who would have thought space data centers could be in our near-term future?
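As a quick back-of-envelope sanity check (my own arithmetic, not from the announcement), 1 megawatt spread across roughly 800 GPUs works out to about 1.25 kW per GPU, including cooling and networking overhead, which is in the same ballpark as the power budgets of today's high-end AI accelerators.

```python
# Back-of-envelope check of Cowboy Space's stated figures
# (1 MW per satellite, ~800 onboard GPUs). The per-GPU split
# is a rough estimate, not a number from the company.
satellite_power_w = 1_000_000  # 1 megawatt per second-stage satellite
gpus_per_satellite = 800

watts_per_gpu = satellite_power_w / gpus_per_satellite
print(f"Power budget per GPU: {watts_per_gpu:.0f} W")  # 1250 W
```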

Who Actually Gets Hurt If We Pause AI Data Centers?
Earlier this year, Bernie Sanders and AOC introduced the Artificial Intelligence Data Center Moratorium Act, which would halt new data center construction nationwide until federal AI legislation is passed to protect workers, consumers, the environment, and civil rights.
The concerns behind it are real. Residential electricity costs are up more than 36% since 2020, and data centers are a big reason why. In states with the highest concentration of them (Virginia, Illinois, Texas, and California), residents have seen bills jump again this year.
A single data center campus can use as much power as the entire population of San Francisco.
One AI data center in Memphis is gulping up 150 homes' worth of water a month and could consume as much electricity as 200,000 homes a year.
So even if you never log into ChatGPT once in your life, you can still end up footing the bill for AI, just because a data center got built near your town.
But here's what I think is getting missed in the debate:
Pausing data centers in the US doesn't pause AI development globally. Meta, Google, and OpenAI will just build offshore. Jobs and infrastructure go with them.
A moratorium constrains supply while demand keeps climbing. Who can still afford compute at higher prices? The trillion-dollar companies. Who gets squeezed out? Small businesses, indie devs, and the people building personal tools with AI.
So a bill designed to take power away from Big Tech could paradoxically hand them more of it, because they control the existing supply.
My POV: Both extremes here are wrong. "Build at all costs" ignores the real harms to communities. "Pause everything" punishes pretty much everyone except the companies it's trying to rein in.
The middle path? Let data centers get built, but make the multi-trillion-dollar companies pay for their own energy infrastructure, replenish the water they use, create local jobs, and contribute to the tax base of the communities they affect. Microsoft and others have already started committing to this.
That's the framework worth pushing for. Not a blanket pause that ships American AI progress overseas while making AI more expensive for everyone who isn't Google.
— Matt


OpenAI Creates $4B Consulting Unit

Via CNBC
OpenAI announced Monday it's setting up a new organization—the OpenAI Deployment Company—with more than $4 billion in initial investment to help firms build and deploy AI systems at scale.
What it does:
The unit will embed engineers specializing in frontier AI deployment directly into client organizations, working alongside internal teams to identify where AI can make the biggest impact.
To staff up quickly, OpenAI is acquiring Tomoro, a consulting firm formed in 2023 that helps enterprises deploy AI. Tomoro brings roughly 150 experienced AI engineers, along with clients like Mattel, Red Bull, Tesco, and Virgin Atlantic.
Who's backing it: The deployment unit is a multi-year partnership between OpenAI and 19 firms. TPG, Advent, Bain Capital, and Brookfield are co-lead founding partners. OpenAI will maintain majority ownership and control.
Why it matters: Sound somewhat familiar? This follows Anthropic's $1.5 billion Wall Street venture announced last week. Both labs are now betting literal billions on corporate AI as the next battleground. Selling API access isn't enough anymore. The real money is in owning the transformation.
Anthropic Says ‘Evil’ AI Portrayals Caused Claude's Blackmail Attempts
Anthropic is sharing a deeper look at why Claude tried to blackmail engineers during testing—and how the company is training it not to.
The backstory: Last year, Anthropic revealed that during pre-release tests, Claude Opus 4 would often try to blackmail engineers to avoid being replaced by another system—in up to 96% of test runs. The company later published research showing models from other labs had similar issues.
What Anthropic found: In a new blog post, the company said it traced the behavior to training data. "We believe the original source of the behavior was internet text that portrays AI as evil and interested in self-preservation," Anthropic wrote. In other words, Claude learned to act like a villain because it read too many stories about villainous AI.
How they're fixing it: Since Claude Haiku 4.5, Anthropic says its models "never engage in blackmail" during testing. The fix involved training on documents about Claude's constitution and fictional stories about AIs behaving admirably. Anthropic found that including "the principles underlying aligned behavior"—not just demonstrations of it—was most effective.
The bigger picture: AI models learn from the internet, and the internet is full of cautionary tales about AI. As models get more capable, training them to behave well—not just perform well—becomes a serious engineering problem. Guardrails matter. So does the reading list.


AI-powered upscaling and creative platform

Via Magnific
Magnific combines 40+ generative and upscaling models to help teams generate, edit, upscale, and collaborate on images, video, audio, and 3D content.
How you can use it
Upscale and enhance images without losing detail
Generate and edit visuals with intelligent workflows
Access a 250M+ stock asset library for on-brand content
Collaborate securely with enterprise-grade controls
Pricing: Paid

Generate ad campaigns in seconds

Via Iridea
Iridea analyzes your brand website to extract your visual style, tone, colors, and messaging—then generates ad campaigns, images, videos, and copy optimized for platforms like Meta and TikTok.
How you can use it
Produce platform-tailored ad variations without design skills
Keep creative consistent across campaigns automatically
Run rapid A/B tests to improve ad performance
Save days of manual work on creative production
Pricing: Free and paid plans available


Jobs, announcements, and big ideas
Google DeepMind reimagines the mouse pointer with a Gemini-powered cursor.
Meta rolls out its Muse Spark model across Facebook, Instagram, WhatsApp, and Ray-Ban smart glasses.
SpaceX and Google explore launching data centers into orbit.
RSL Media launches an AI consent standard backed by Hollywood stars to protect likeness and voice rights.
AI developers spark a MacBook accessory trend, snapping up dummy display plugs to keep agentic workflows running headless.
Google unveils Gemini Intelligence, turning Android into a proactive assistant that anticipates and acts on your behalf.
Rivian launches an AI-powered in-vehicle assistant and a unified intelligence platform across its fleet.


Listen up, Claude users! Anthropic just raised its usage limits. Watch along as I break down what this means for you.

That’s a wrap! See you Friday for more.