Anthropic's security nightmare begins
Plus: Google Cloud Next highlights
Welcome back! A robot is now beating elite human table tennis players. Sony built the robotic arm, called Ace, and pitted it against professional and Olympic athletes. The robot learned to play using reinforcement learning, tracking the ball's spin with nine cameras positioned around the table.
One pro, watching Ace pull off a shot he thought was impossible, said: "No one else would have been able to do that. I didn't think it was possible."
But if a robot can do it, maybe a human can too…
— Matt and the Future Tools team


What We Learned at Google Cloud Next

Via ZDNET
Google Cloud held its annual conference this week, and the company came out swinging with announcements across chips, enterprise AI, and workspace tools. Here are three highlights:
New AI chips to compete with Nvidia: Google announced its eighth-generation TPUs, now split into two variants—TPU 8t for training and TPU 8i for inference. The company claims up to 3x faster model training, 80% better performance per dollar, and the ability to cluster over 1 million TPUs together.
Chrome becomes an AI co-worker: Enterprise Chrome users are getting "auto browse" capabilities powered by Gemini. The AI can understand context across open browser tabs and carry out tasks on the user's behalf.
AI Overviews come to Gmail at work: The feature that summarizes Google Search results is now coming to Gmail for business, enterprise, and education customers.
Why it matters: Google is going big on enterprise and integrated AI. The message is clear: AI should be embedded in the browser, the inbox, and the infrastructure you already use.
Anthropic's Most Dangerous AI Model Falls into the Wrong Hands
Anthropic's Mythos model—the powerful cybersecurity tool the company said could be dangerous if misused—has been accessed by unauthorized users. An unnamed member of a private Discord group gained access through a third-party contractor, according to Bloomberg.
What happened: The user reportedly accessed the model on April 7, the same day Anthropic announced it was releasing Mythos to a limited number of companies through Project Glasswing. Official access was supposed to be restricted to partners like Nvidia, Google, AWS, Apple, and Microsoft. Anthropic says it's investigating and currently has no evidence that the breach extends beyond the third-party vendor's environment.
Still without access: This week, Axios reported that the Cybersecurity and Infrastructure Security Agency—the nation's central cybersecurity coordinator—doesn't have access to Mythos. Other agencies like the Commerce Department and NSA reportedly do. Combined with the Trump administration's efforts to limit CISA's workforce and funding, it seems like America's cybersecurity agency still isn't being prioritized.
The bigger picture: Labs are racing to build increasingly powerful AI tools—but the question of how to distribute them safely remains wide open. Anthropic explicitly said Mythos could be weaponized and had no plans to release it publicly outside of its partners. Two weeks later, unauthorized users had access anyway.
OpenAI Releases Open-Source Model to Scrub Personal Data
OpenAI has released Privacy Filter, an open-source model designed to detect and redact personally identifiable information before it reaches a cloud server. The 1.5-billion-parameter model runs on a standard laptop or directly in a browser, and it's available on Hugging Face under a permissive Apache 2.0 license.
How it works:
Unlike standard language models that predict the next token by reading left-to-right, Privacy Filter is bidirectional. It looks at the words that come before and after a term to better understand context.
It can identify names, addresses, emails, phone numbers, URLs, account numbers, dates, and even credentials like API keys and passwords.
The model activates only 50 million parameters per pass, which keeps throughput high and costs low. At the same time, it supports a 128,000-token context window, meaning it can process entire legal documents or long email threads in a single pass.
Why it matters: The tool addresses a growing concern in enterprise AI: the risk of sensitive data leaking into training sets or being exposed during inference. By masking data locally before sending it to a more powerful model, companies can maintain GDPR or HIPAA compliance while still using frontier AI.
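The local-masking pattern described above can be sketched in a few lines. To be clear, this is not Privacy Filter itself (which is a learned bidirectional classifier, not regexes); the patterns, labels, and the `redact()` helper below are illustrative assumptions showing how PII can be replaced with typed placeholders before text ever leaves the machine.

```python
import re

# Hypothetical PII patterns for illustration only. A real system would use
# a trained model (like Privacy Filter) rather than hand-written regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "API_KEY": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
}

def redact(text: str) -> str:
    """Replace each PII match with a typed placeholder like [EMAIL]."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

masked = redact("Contact jane.doe@example.com or 555-123-4567.")
print(masked)  # Contact [EMAIL] or [PHONE].
```

The masked text, with placeholders instead of raw values, is what would be sent on to the cloud model, keeping the sensitive originals on the local machine.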

No Prompts, No Setup, No Engineering—AI Shouldn’t Wait for Instructions
Your team’s drowning in busywork, as usual. And writing prompts for AI is just another task to manage. Most AI tools are useless without human input—Hapax isn’t.
The first Human-AI Operating System, Hapax already:
Reduces incident response time from 4 hours to 20 minutes
Onboards new hires 60% faster via automated knowledge capture
Reduces support volume by 40% in 30 days
Increases demo conversion rate 4x with automated prep
Ready to work with proactive AI?


Record workflows and auto-generate documentation

Via iGenflow
iGenflow is a browser extension that records web-based workflows—clicks, keystrokes, screenshots, and optional voice notes—and uses AI to convert them into step-by-step guides, SOPs, and visual tutorials exportable as PDF, Word, or Markdown.
How you can use it
Document a process automatically as you perform it
Auto-generate polished descriptions and annotated screenshots
Blur sensitive data before sharing
Standardize onboarding and training docs across your team
Pricing: Free

AI-powered project management

Via FoundStep
FoundStep enforces a strict lifecycle and accountability system to help solo developers validate, focus, and finish projects instead of abandoning them. Its AI prioritizer surfaces what to build next based on to-dos, deadlines, and project phase.
How you can use it
Lock scope and prevent feature creep with enforced rules
Generate multiple MVP plans to stay focused
Move projects through phases with auto-advancement
Pricing: Paid


Jobs, announcements, and big ideas
OpenAI unveils GPT-5.5, a faster model designed for advanced coding, research, and data-heavy tasks.
Spotify integrates Anthropic’s Claude to deliver more personalized music and podcast recommendations.
Anthropic resolves performance issues in Claude Code following updates to reasoning, caching, and prompting systems.
Deepseek launches preview versions of its newest LLM.
Meta and Amazon sign a deal for AWS AI chips.
Microsoft rolls out Foundry Agent Service in public preview, enabling secure deployment and scaling of hosted AI agents.
Anthropic outlines MCP integration patterns for connecting Claude-based agents to real-world production systems.


Want the rundown on everything else that matters in AI and tech this week? Look no further. Here’s my weekly breakdown ➡️
And one more for good measure—a hot take. I share my honest thoughts on whether AI will replace content creators.

That’s a wrap! See you next week for more.
—Matt (FutureTools.io)


