No-win?
Plus, OpenAI gets sued for over $1B
Welcome back! There’s a lot to unpack this week... my vote? We dig right in. We’ve got a new section for you here—hit reply and tell me what you think. Time to rock.

Google’s Pentagon AI Deal Puts AI Labs in a No-Win Situation
Google just signed a deal giving the Pentagon access to its AI models for classified work.
Why it’s interesting: We recently saw Anthropic deny the Pentagon access after the Pentagon wouldn’t agree to Anthropic’s two red lines—no fully autonomous weapons and no spying on American citizens.
Shortly after that, OpenAI swooped in and basically said, “We’ll do a deal with the Pentagon.”
The weird thing? OpenAI claimed it had the same red lines: no autonomous weapons and no spying on US citizens.
During this whole debacle, the Pentagon deemed Anthropic a supply chain risk, meaning Anthropic couldn’t be used downstream in government projects. Now, this week, we get the news that Google essentially said, “You can use our models for all lawful purposes.”
Google doesn’t appear to have the same hard red lines. They’re saying their models aren’t designed for surveillance or autonomous weapons, but they’re not explicitly saying the Pentagon can’t use them for those purposes.
That’s a really interesting move, especially because OpenAI got a ton of backlash when it came to light that they were willing to work with the Pentagon after Anthropic wouldn’t.
Anthropic was sort of seen as the security hero for holding the line and not backing down to the US government.
This also comes after hundreds of Google employees reportedly urged the company not to make the deal. So there’s probably going to be backlash both inside and outside of Google.
My POV?
Honestly, I think what’s happening with Google is more of a reflection on the US government than it is on Google. The government is essentially saying, “If you don’t bend to our will and let us use these models how we want to use them, we can deem you a supply chain risk, block you from government work, make it harder for other companies to do business with you, and seriously damage your company.”
These big foundation labs are operating in a world where the legislation is still being written. There are still a lot of legal gray areas. So it’s in their best interest to align with the government—because the government is ultimately going to help shape the rules they have to follow.
But to the everyday person, it can look like these companies are just bending to the will of the government.
The question I have: Are people over-indexing on how powerful these AI models are right now? Most of the government and Pentagon use cases today are probably things like data analysis and summarization. For the most part, these models aren’t being used as the brains of autonomous weapons (yet).
But labs like Anthropic are trying to get ahead of that potential future.
Big picture: Personally, I don’t like the idea of AI being used for autonomous weapons or spying on US citizens. At the same time, it’s really hard to take a clean side on this specific issue because I can honestly understand both arguments.
Google is probably in a position where it feels it doesn’t have much of a choice: either it works with the government, or the government starts rallying against it.
It’s kind of a no-win situation for all of the companies involved. Anthropic, OpenAI, Google, xAI, and really any company building frontier AI is going to have this same problem looming over them.
And personally, I would not want to be the one making these decisions.
— Matt


Seven Families Sue OpenAI for Over $1 Billion
A lawsuit filed this week could set a landmark precedent for AI liability—and it comes from one of the most tragic events in recent memory. Seven families affected by the February 2026 mass shooting in Canada are suing OpenAI and CEO Sam Altman for more than $1 billion in damages.
What happened:
Eight months before the shooting, OpenAI's automated systems flagged an account belonging to 18-year-old Jesse Van Rootselaar for "gun violence activity and planning." The company's safety team reviewed the content and urged leadership to notify law enforcement.
According to the suit, leadership declined—choosing only to deactivate the account. Van Rootselaar created a second account and continued using ChatGPT to plot the shooting.
The core claim: The families allege OpenAI had both the knowledge and the obligation to alert police—and made an active decision not to. Sam Altman is named personally in the suit alongside the company. A spokesperson from OpenAI called the shooting “a tragedy,” and Sam Altman said he was “deeply sorry” that the account was not flagged to law enforcement.
Why it matters: Most AI liability cases center on bias, hallucinations, or intellectual property. This one is different—it's a wrongful death claim rooted in a failure to act on a known, specific threat. The AI safety debate is getting very real, very fast.
ChatGPT's Download Problem Couldn't Come at a Worse Time
OpenAI is quietly laying the groundwork for an IPO. So the timing couldn't be worse: new data shows ChatGPT downloads have slowed sharply, and the trend lines are heading in the wrong direction.
The numbers:
ChatGPT's year-over-year download growth came in at just 14%—while rival Claude's downloads grew 11x over the same period.
Uninstalls spiked 132% year-over-year in April, after an even sharper 413% jump the month before.
OpenAI reportedly missed internal targets for new users and revenues (although the CFO disputed that), and it’s planning a lower-priced "ChatGPT Go" tier to reach 112 million new users by year-end.
What's behind the slowdown: Rival chatbots are maturing fast, and market saturation is setting in. At the same time, OpenAI has ramped up advertising inside ChatGPT—nearly 14% of users were seeing ads by late April, up from just 1% in March. That shift appears to be pushing some users out the door. Plus, the Pentagon deal I mentioned at the beginning alienated some users too.
Problematic timing: OpenAI is targeting a Q4 2026 public listing, according to the Wall Street Journal. Slowing downloads, rising uninstalls, and a scramble to introduce cheaper tiers are not exactly the growth metrics a pre-IPO company wants surfacing in the press.
The bigger picture: Numbers don't lie—people are voting with their downloads (or lack thereof). The real question heading into the IPO isn't whether ChatGPT is still the market leader. It's whether its lead over the competition is durable enough to justify the valuation.
Amazon Had a Monster Quarter Thanks to AI
Amazon dropped Q1 2026 earnings, and the headline numbers were strong. But the more significant story is what the results reveal about Amazon's AI strategy.
Three highlights ➡️
AWS revenue grew 28% year-over-year to $37.6 billion, beating analyst estimates. AI services within AWS are now running at more than $15 billion annually.
The company forecasted $200 billion in capital expenditures on AI infrastructure for 2026.
Amazon entered the superagent race with Claude Cowork now available natively in Amazon Bedrock, and the launch of Amazon Quick—a $20/month desktop AI assistant.
The bigger picture: Amazon is no longer just the cloud landlord. It's simultaneously the infrastructure provider for OpenAI and Anthropic, the hardware manufacturer building their chips, and now an active competitor in the agent and productivity software market. It makes you wonder: what won’t Amazon do?

Introducing Ghost—A Database Your AI Agent Can Actually Drive
Ghost is a new kind of Postgres database designed for AI agents from the ground up. Instead of you setting up and managing database infrastructure, your agent handles it. Spin up a database, fork it, run queries, tear it down when you're done—all through MCP, all without touching a config file.
It works natively with Claude Code and other agent tools, and getting started takes about 30 seconds.
What makes it different:
1 TB of free storage and unlimited databases, with hard spending caps so you never get a surprise bill
Your agent can authenticate and operate the database
Fork any database instantly to run experiments in parallel, then keep what works
No limitations on projects
The mental shift people describe once they start using it: When databases are free and disposable, you stop treating them carefully. You run 10 experiments at once. You let the agent figure out the schema. It changes how you build.
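To make the fork-per-experiment idea concrete: Ghost's actual interface isn't shown here, so this is just a rough Python sketch of the pattern, using SQLite's backup API as a stand-in for forking. Each experiment gets a disposable copy of the base database, and the original stays untouched.

```python
import sqlite3

def fork(base: sqlite3.Connection) -> sqlite3.Connection:
    """Copy the base database into a fresh in-memory 'fork'."""
    copy = sqlite3.connect(":memory:")
    base.backup(copy)
    return copy

# Base database with one table of prices.
base = sqlite3.connect(":memory:")
base.execute("CREATE TABLE prices (item TEXT, cents INTEGER)")
base.execute("INSERT INTO prices VALUES ('widget', 100)")
base.commit()

# Run two divergent experiments, each against its own disposable fork.
results = {}
for name, discount in [("exp_a", 10), ("exp_b", 50)]:
    db = fork(base)
    db.execute("UPDATE prices SET cents = cents - ?", (discount,))
    results[name] = db.execute(
        "SELECT cents FROM prices WHERE item = 'widget'"
    ).fetchone()[0]
    db.close()  # tear the fork down when done

# The base database is unchanged; keep whichever experiment you like.
original = base.execute(
    "SELECT cents FROM prices WHERE item = 'widget'"
).fetchone()[0]
print(results, original)  # {'exp_a': 90, 'exp_b': 50} 100
```

With a hosted service like Ghost, the agent would do the forking over MCP instead of in-process, but the workflow is the same: fork, mutate, compare, discard.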


An all-in-one automation platform for X

Via Xquik
Xquik gives you 40+ tools to extract tweets, followers, replies, Spaces, and communities—plus write actions like posting, liking, retweeting, and DMing—all via REST API, webhooks, and an MCP server that works with Claude, ChatGPT, Cursor, and Copilot.
How you can use it
Let MCP-compatible AI agents call Xquik directly to pull live X context into workflows
Run automated pipelines for content generation, moderation, and giveaways
Monitor competitor accounts and export data without managing scrapers
Integrate X data into custom AI apps without hitting API rate limits
Pricing: Paid

AI-powered scheduling without the back-and-forth

Via Wellpin
Wellpin eliminates meeting coordination friction by letting you share real-time availability through a personalized link, with automatic conflict detection and reminders.
How you can use it
Share a link so contacts can book without email chains
Automatically detect and block scheduling conflicts across calendars
Set up group scheduling for team interviews or events
Send automated reminders to cut down on no-shows
Pricing: Free


Jobs, announcements, and big ideas
Anthropic pushes to expand access to its Mythos AI model, drawing opposition from the White House over national security concerns.
Perplexity expands Comet's Computer with Microsoft Teams, an Excel beta, automated workflows, and 1Password security integration.
OpenAI rolls out Advanced Account Security for ChatGPT and Codex, adding passkeys and hardware key support.
Spotify introduces a "Verified by Spotify" badge to confirm artists are humans, not AI.
X plans a full AI-powered rebuild of its ads platform, with a new Ads Manager rolling out starting April 2026.
Mistral debuts Vibe, a set of remote cloud coding agents powered by a preview of its Medium 3.5 model.


Here’s your rundown of the week’s biggest AI news—look no further:
TK

That’s a wrap! See you next week for more.
—Matt (FutureTools.io)


