Go big or go home

Meta is looking for nuclear options

Welcome back! Spotify just launched a bold new feature in Wrapped: an AI-generated podcast tailored to your music taste and listening habits. Imagine a virtual co-host recapping your year with insights into your top tracks, artists, and even mood shifts throughout the seasons.

Is this the future of personalized audio, or is Spotify just having a little fun with AI?

P.S. Thanks for participating in our recent polls. Since you’ve been so willing to answer my questions…I’m returning the favor. Got a question for me about AI, tech, or the future? I’d love to answer it in an upcoming edition.

Amazon’s Nova AI processes text, images, and videos

Via Amazon

Amazon’s Nova AI models are here. These new multimodal models are designed to process text, images, and videos, making them some of the most versatile tools in Amazon’s AI lineup.

What Nova can do:

  • Text: Nova handles language tasks like summarization, translation, and question answering, helping businesses sift through massive amounts of information faster.

  • Visual: Nova’s image and video models bring AI to industries like retail and logistics, where tasks like inventory tracking, automated video tagging, or visual search are essential.

What’s next for Nova: Amazon plans to integrate these models into AWS, giving businesses access to powerful AI tools that can handle multimedia tasks seamlessly. This could mean smarter customer service, real-time video analysis, or automated workflows across industries like e-commerce and logistics.

The bigger trend: Multimodal AI is becoming the industry standard, with competitors like Google’s Gemini and OpenAI’s emerging tools also chasing this frontier. 

Why? Our world is multimodal—we communicate through words, visuals, and even video all at once—and text-only models are no longer enough. Nova’s ability to synthesize text, images, and video reflects how AI is evolving to process data in more human-like ways. It’s not just about smarter AI. It’s about making AI practical for how we live and work.

Meta’s AI is going nuclear

Via ESG News

Meta’s AI ambitions are massive—and now, it’s going nuclear. The company is reportedly searching for nuclear developers to help fuel the skyrocketing energy demands of its AI projects.

Why nuclear energy? AI is energy-hungry, with data centers consuming enormous amounts of power to train and run models. Nuclear power is emerging as an alternative to fossil fuels that can keep these massive systems running around the clock.

Meta isn’t alone: Earlier this year, Amazon acquired a nuclear-powered data center campus to support its AI operations. Microsoft has also explored small modular nuclear reactors to power its cloud infrastructure. 

The big picture: AI’s environmental impact is becoming harder to ignore. Nuclear energy could offer a cleaner way to meet growing demands, but scaling it comes with challenges. High costs, regulatory hurdles, and public perception will all play a role in whether this trend truly takes off.

Google’s AI predicts the weather like a pro

Via Google DeepMind

Google’s new AI weather tool is breaking forecasting records, accurately predicting conditions up to 15 days in advance. Trained on 40 years of weather data, this AI identifies patterns that human meteorologists might miss, delivering predictions faster and more precisely.

How it works: Unlike traditional forecasting models, which typically work from narrower datasets, Google’s AI learns from decades of historical global data to determine the most likely weather outcomes.

What it can do:

  • Wind energy optimization: Predicting wind patterns helps turbine operators maximize efficiency by knowing when to turn turbines on or off.

  • Disaster preparedness: Early warnings for extreme weather events, like hurricanes or floods, allow for better planning and response.

Why it matters: With tech giants like Nvidia and Huawei also entering the AI weather space, this is shaping up to be a critical area for innovation. But despite its potential, even Google acknowledges that human meteorologists are still essential to interpret results and make the most accurate calls.

  • Axiado introduces a chip to stop cyberattacks

  • ChatGPT hits 300 million weekly users

  • Meta shifts focus to Llama AI while still relying on GPT-4 internally

  • Google makes Veo video generator available to Cloud users 

  • Apple may increase its use of Amazon’s AI chips for advanced models

Sora can run…but it can’t hide: Watch along as I break down the latest news in AI.

Filter through the AI noise: Join us as we do a side-by-side comparison of the latest and greatest AI tools for development. In this episode, we test Bolt vs V0 vs Replit vs Websim.AI. Check it out!

That’s a wrap for this week! See you next time.

—Matt (FutureTools.io)

P.S. This newsletter is 100% written by a human. Okay, maybe 96%.