Nvidia redefines AI infrastructure

Plus: The future of 3D AI video

Happy Monday! Welcome to Friday’s edition of Future Tools…just a few days late due to some technical difficulties (I’m only human…not AI 😉). Luckily, we’re back up and running to get you the latest news from a wild week of AI.

Last week, Nvidia’s GTC ‘24 conference (dubbed the “Woodstock festival of AI”) drew over 300,000 attendees—a testament to the massive attention that comes with producing the most desired hardware in the world. And Nvidia’s announcements at the event didn’t disappoint…more on that later.

Oh, and a quick PSA: Pika Labs’ new Sound Effects feature is now available to all users. Give it a try and let me know what you think. :)

Nvidia Shapes the Future of AI Infrastructure

Nvidia’s GTC conference had one theme: The company is building the next generation of AI infrastructure. And it starts with a major upgrade in chip architecture.

Introducing Blackwell GPUs. Blackwell is Nvidia’s brand-new chip architecture designed to be the engine for the next great leap in AI. It consists of the Blackwell B200—a 208-billion-transistor chip that is Nvidia’s new most powerful single-chip GPU—and the Blackwell GB200, which combines two B200 GPUs and a Grace CPU into one mega “Superchip.”

What’s the buzz? According to Nvidia, the GB200 will be the world’s most powerful chip.

  • Compared to Nvidia’s Hopper chips, Blackwell delivers 2.5x the speed and efficiency for LLM training and up to a 30x performance increase for LLM inference.

  • It also executes AI tasks and analyzes data at speeds previously unattainable, all while cutting cost and energy consumption by up to 25x compared to its predecessor.

And the big guys are already on board: Multiple tech titans have already committed to adopting the Blackwell chips when they become available later this year—including Tesla, Amazon, Google, and OpenAI.

Why it matters: Remember when GPUs revolutionized gaming and computer graphics? Blackwell represents that level of potential upheaval, but for the AI revolution. With this new hardware, AI companies can finally realize their most ambitious visions: Think hyper-realistic virtual worlds, curing diseases through rapid digital drug testing, and even quantum computing—all at lower costs and energy consumption.

Stability AI Creates 3D Videos From a Single Image

Stability AI just pushed the boundaries of AI-powered 3D modeling. 

Enter: Stable Video 3D (SV3D), Stability AI's breakthrough technology that can generate photorealistic 3D video renderings from simple inputs like a single photograph or text prompt.

How it works: SV3D combines two powerful AI model capabilities:

  1. Generating consistent video clips from just a single image input

  2. Calculating camera movements to capture a 3D object from any angle

What makes SV3D special? A feature called novel view synthesis (NVS), which allows SV3D to generate coherent views from any angle. This creates 3D models that are not just detailed but dynamically interactive—a serious upgrade from previous 3D generation tools, which offered only limited perspectives.

  • To use it, simply provide a text prompt describing the 3D scene you want or upload a reference image.

  • SV3D will generate a photorealistic 3D rendering you can orbit around using adjustable camera controls.

SV3D is available for commercial purposes with a Stability AI Membership. For non-commercial use, you can download the model weights on Hugging Face.

Side note: Something sticky is going on at Stability. This launch comes at an interesting time for the company: On Friday, Stability AI’s CEO Emad Mostaque resigned to "pursue decentralized AI." The leadership change comes amid a string of key departures and legal challenges facing the company.

Why it matters: SV3D represents a major breakthrough in generative AI for 3D content creation. This innovative technology has the potential to significantly accelerate and democratize 3D content production across industries like gaming, film, AR/VR, and more...so long as Stability AI’s leadership changes don’t hinder its growth.

Get Deep Market Research—Fast

Up-to-date customer insights make a difference. These insights can contribute to a 175% increase in revenue, higher-quality leads, and 72% faster lead conversion. But market research takes time—and money.

That’s where Osum comes in. Just copy and paste a URL into Osum to instantly access detailed industry insights, SWOT analysis, buyer personas, sales prospect profiles, growth opportunities, and more for any product or business. Osum saves you weeks and thousands of dollars with the click of a button.

Stay ahead of your competition and discover new ways to unlock 10X growth:

P.S. Future Tools readers get 25% off on any plan for life. Use the code FUTURETOOLS at checkout.

Is Apple Partnering With Google?

Two of the world’s largest tech companies might be teaming up to put AI in your pocket: Apple and Google are in talks to integrate Gemini into the iPhone. 

What’s happening: Bloomberg reported that Apple and Google are in negotiations about rolling out Gemini-powered AI capabilities to iPhones.

  • If a Gemini licensing deal goes through, new AI capabilities could be a big part of the upcoming iOS 18 upgrade (set to be announced at Apple's Worldwide Developers Conference in June).

  • Possible AI iPhone features could include generative wallpapers, text generation, photo editing, and Siri upgrades.

Behind the scenes: Apple has been busy developing its own AI models, including an LLM codenamed Ajax and its AppleGPT chatbot. This raises the question: Why is Apple leaning on one of its biggest AI competitors to bring AI to its hardware? Some believe a Google partnership is a fallback solution as Apple’s own AI tech struggles to catch up.

Why it matters: Google already pays Apple billions of dollars to make it the default search engine on Apple devices—a partnership that’s seen its fair share of regulatory scrutiny. If this deal goes through, Gemini could land on the more than 2 billion Apple devices in active use worldwide…and antitrust experts are already anticipating major pushback.

  • Open Interpreter just dropped the 01 Light, an open-source AI device that allows voice-controlled interaction with your home computer.

  • Microsoft’s new Surface for Business devices bring AI to hardware.

  • New leaks from internal sources at OpenAI revealed details for GPT-5’s capabilities and launch timeline.

  • YouTubers will be required to label realistic videos generated by AI. 

  • Google researchers developed VLOGGER, a new AI system that can bring still photos to life.

  • Elon followed through on his promise to open-source Grok.

  • Nvidia unveiled GR00T, a multimodal foundation model for humanoid robots.

More important AI news: Dive deeper into this week’s hottest AI news stories (because yes, there are even more) in my latest YouTube video:

Underground AI: Here are 7 awesome AI tools I’m sure you didn’t know about (but should):

The interview: Sam Altman talks the OpenAI board saga, Elon Musk, and more in this new podcast:

And there you have it! Don’t forget to try out Pika Labs’ new Sound Effects feature—I’m really curious to hear your thoughts on whether the sound generation lives up to the hype. Have a great week!—Matt (FutureTools.io)

P.S. This newsletter is 100% written by a human. Okay, maybe 96%.