The new GPT competitor
Plus: Apple revolutionizes image editing
Happy Friday! It finally happened. Shortly after rebranding the Bard chatbot to Gemini, Google just launched Gemini Ultra—its most powerful LLM yet. You can access it via the Gemini Advanced paid subscription ($20/month with a two-month free trial), similar to ChatGPT Plus. If you try it out, I’d love to hear your thoughts!
Hugging Face Takes on OpenAI
There’s a new, free alternative to OpenAI’s GPT Builder and GPT Store—and it’s 100% open-source.
Enter: Hugging Face. Dubbed “the GitHub of machine learning,” Hugging Face is a collaboration platform and community for developers to share data on their AI models. Hugging Face has emerged as a leading player in open-source AI, solidified by a recent partnership with Google.
The new GPT alternative: Hugging Face just unveiled Hugging Chat Assistants. These assistants work just like OpenAI’s GPTs—they’re customizable AI chatbots with specific capabilities.
But there are some key differences:
Hugging Chat Assistants are completely free (while a ChatGPT Plus subscription costs $20/month).
Users can choose from various open-source models (including LLaMA 2 and Mixtral) to power their Assistants.
Unlike GPTs, Assistants don’t support web search or image generation.
Hugging Face also launched a central repository for Assistants, similar to the GPT Store. This allows users to share their own chatbots and browse those made by others.
Try it out: Just open HuggingChat (Hugging Face’s signature AI chatbot) and click “Create new assistant.” After giving your model a name and a description, type in some instructions for what you want it to do—et voilà.
From micro to macro: The launch of Hugging Chat Assistants is another big win for the open-source camp. By offering a free alternative to OpenAI’s GPTs, Hugging Face is making custom AI tools accessible to everyone.
Optimize Your YouTube Content in One Click
Are you making the most of your YouTube content?
Your long-form YouTube content is a goldmine of potential Shorts—and if you aren’t making Shorts, you’re missing out on potential views, new subscribers, and audience engagement.
That’s why over 10 million creators and businesses use TubeBuddy’s AI-powered Suggested Shorts feature. This tool can:
Analyze and identify the most engaging parts of your video.
Recommend up to eight clips tailored to your specific content and audience.
Offer expert suggestions to enhance your titles, descriptions, and tags.
Meta’s Quest to Label AI-Generated Images
Source: Meta
On Tuesday, Meta announced that it’s working to identify and label AI-generated images—and OpenAI quickly joined the mission.
Meta’s vision: When a user uploads an image to Facebook, Instagram, or Threads, Meta will scan its metadata—invisible markers embedded in the image file—to check whether it was AI-generated. It will then flag detected AI images with a visible label (like a sparkles emoji ✨).
The catch? Major AI image generators don’t yet include this metadata in their images—so Meta is first working to get industry partners (like Google, Microsoft, Adobe, and Midjourney) on board.
And OpenAI already stepped up. Hours after Meta’s announcement, the company announced its plans to start embedding metadata tags into images created by its DALL-E 3 model.
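To make the scheme concrete, here’s a minimal sketch of the kind of metadata check Meta describes. It assumes the IPTC digital-source-type value “trainedAlgorithmicMedia,” which is a real marker generators can embed in an image’s XMP metadata; the byte-scan detection logic and function names are simplified illustrations, not Meta’s actual implementation.

```python
# Hypothetical sketch of a platform-side AI-image check (not Meta's code).
# Assumption: the generator embedded IPTC's "trainedAlgorithmicMedia"
# digital-source-type value in the image's XMP metadata.

AI_MARKER = b"trainedAlgorithmicMedia"

def looks_ai_generated(image_bytes: bytes) -> bool:
    """Return True if the image file carries the AI-generation metadata tag."""
    # A real pipeline would parse the XMP/C2PA metadata properly;
    # a raw byte scan is enough to show the idea.
    return AI_MARKER in image_bytes

def label_for(image_bytes: bytes) -> str:
    """Attach a visible label when the tag is found, else no label."""
    return "✨ AI Info" if looks_ai_generated(image_bytes) else ""
```

The catch described above applies here too: if the generator never wrote the marker, there is nothing for the scan to find—which is why Meta needs industry partners to embed it in the first place.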
From micro to macro: Meta and OpenAI’s labeling initiatives are a response to growing public awareness of AI deepfakes. Recent incidents—like AI robocalls from a fake President Biden and explicit deepfakes of Taylor Swift—show that bad actors are leveraging AI to attack both public figures and political discourse. Without robust AI regulation in place, it’s in the hands of tech companies to fight back.
Apple Just Revolutionized Image Editing
Editing images is now as easy as texting, thanks to Apple's groundbreaking new AI model, MGIE.
The details: MGIE (short for MLLM-Guided Image Editing) is an open-source model that can edit images based on simple natural language instructions.
How it works: MGIE leverages the capabilities of Multimodal Large Language Models (MLLMs) to interpret your textual instructions and apply them to images. This allows MGIE to perform a wide range of editing tasks—from resizing and adding filters to changing the color and style of specific objects.
MGIE specializes in turning vague prompts into concise, explicit editing instructions.
For example, when you type “make the sunset more intense,” MGIE will convert that into the instruction “increase the saturation of the setting sun by 25%.”
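That two-stage flow—refine the instruction, then apply it—can be sketched in a few lines. This is an illustrative assumption about the pipeline, not MGIE’s actual code; the function names and the hard-coded rewrite rule are hypothetical stand-ins for the MLLM and the editing model.

```python
# Hypothetical sketch of MGIE's two-stage flow (not the real implementation):
# 1) an MLLM rewrites a vague prompt into a concrete edit instruction,
# 2) an editing model applies that instruction to the image.

def derive_instruction(prompt: str) -> str:
    """Stand-in for the MLLM step that makes a vague prompt explicit."""
    rules = {
        "make the sunset more intense":
            "increase the saturation of the setting sun by 25%",
    }
    # Fall back to the raw prompt when no refinement rule matches.
    return rules.get(prompt.strip().lower(), prompt)

def edit_image(image, prompt: str) -> str:
    """Refine the prompt, then hand the instruction to the editor."""
    instruction = derive_instruction(prompt)
    # A real pipeline would feed `instruction` to a diffusion-based editor;
    # here we return the instruction that would drive the edit.
    return instruction
```

The key design point is that the editing model never sees your fuzzy wording—only the refined, unambiguous instruction—which is what makes the edits predictable.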
MGIE is available on GitHub, or you can try it out through a web demo on Hugging Face.
From micro to macro: MGIE is Apple's first big release as an AI researcher and developer. It also underscores the major potential of multimodal LLMs to revolutionize the way we create content.
Smaug-72B is the world’s best open-source LLM.
The EU’s AI Act passed the last big hurdle on its way to adoption.
Microsoft is pushing for AI to go mainstream with its new Copilot Super Bowl ad.
A leaked Google document gives new insights on Gemini Ultra’s future.
YouTube’s CEO has high hopes for AI in 2024.
More important AI news: Dive deeper into this week’s hottest AI news stories (because yes, there are even more) in my latest YouTube video:
Gemini Ultra: I put Google’s new model to the test right after it came out. This new video covers everything you need to know:
Cracking the code: AI just decoded 2,000-year-old scrolls in a major historical breakthrough. Check out the whole story here:
And there you have it! If you try Gemini Ultra, be sure to let me know what you think. Just reply to this email! Catch you back here next Wednesday. :)
—Matt (FutureTools.io)
P.S. This newsletter is 100% written by a human. Okay, maybe 96%.