AI: Beyond the Buzz
Nvidia GPUs to AI Model Maker
Meta Dumps Open Source AI

Hey folks,
Google just turned your headphones into a universal translator. The new Gemini 2.5 Flash Native Audio feature in Google Translate turns any pair of earbuds into a live translator across 70+ languages, preserving the speaker's actual tone and cadence. Speak English, your phone broadcasts Hindi, and you hear the response translated back. Previously this was a Pixel Buds exclusive; now it works with whatever headphones you've got lying around. Global communication just got a whole lot easier. See it in action here.
Let's dive in.

Meta Is Abandoning Open Source AI

The Buzz: Remember when Mark Zuckerberg said "screw that" to closed AI platforms and wrote a manifesto titled "Open Source AI is the Path Forward"? That was last year. Now Meta is reportedly pivoting to a proprietary model codenamed Avocado after Llama 4's internal failure, and the company is in chaos. Chris Cox is out of the AI division, Yann LeCun has left the company, and a 28-year-old from Scale AI is now running the show.
What happened to Llama:
- Llama 4 "Behemoth" was shelved after poor benchmark performance
- On LiveCodeBench, Llama scored 40% vs GPT-5 at 85% and Grok 4 at 83%
- Leadership panicked after DeepSeek's R1 model successfully copied Llama's architecture
- The commercial risks of releasing open weights suddenly became real
The Avocado pivot:
- A proprietary closed model targeting a Q1 2026 release
- Outside developers won't be able to download the models as they could with Llama
- Being built inside "TBD Lab," a smaller elite unit headed by the new AI chief
But here's the irony: Bloomberg reports that the team is training Avocado using external models including Google's Gemma, OpenAI's gpt-oss, and Alibaba's Qwen. Meta is using Chinese AI technology to build a closed model, while Zuckerberg previously warned about Chinese censorship in open-source models. The contradictions are piling up.
The Llama brand's future is unclear. It might continue as a "lite" offering or get deprecated entirely. For developers who built on Llama's open weights, this is a rug pull in slow motion. The company that championed open source as "the path forward" is now closing the door behind it.
Takeaway: If your AI strategy depends on Meta's Llama models, start planning alternatives now. DeepSeek, Qwen, and Mistral are filling the gap Meta is leaving. The irony: Meta's decision to close up was partly driven by DeepSeek copying Llama's architecture, so competitors have already extracted the value from Meta's open-source work. The open-source community will survive this; Meta's credibility in that community probably won't.

Together with Masterworks
Last Time the Market Was This Expensive, Investors Waited 14 Years to Break Even
In 1999, the S&P 500 peaked. Then it took 14 years to gradually recover by 2013.
Today? Goldman Sachs sounds crazy forecasting 3% returns for 2024 to 2034.
But we're currently seeing the highest price for the S&P 500 compared to earnings since the dot-com boom.
So maybe that's why they're not alone; Vanguard projects about 5%.
In fact, now just about everything seems priced near all-time highs: equities, gold, crypto, etc.
But billionaires have long diversified a slice of their portfolios with one asset class that is poised to rebound.
It's post-war and contemporary art.
Sounds crazy, but over 70,000 investors have followed suit since 2019 with Masterworks.
You can invest in shares of artworks featuring Banksy, Basquiat, Picasso, and more.
24 exits later, results speak for themselves: net annualized returns like 14.6%, 17.6%, and 17.8%.*
My subscribers can skip the waitlist.
*Investing involves risk. Past performance is not indicative of future returns. Important Reg A disclosures: masterworks.com/cd.

Nvidia Just Became a Major AI Model Maker

The Buzz: Remember when Nvidia was just a GPU company? Well, Nvidia launched Nemotron 3 yesterday, a family of open-weight AI models designed for agentic AI. And here's the kicker: in 2025, Nvidia became the top contributor of open models and datasets on Hugging Face, with roughly 650 models and 250 datasets. They're not just selling picks and shovels anymore; they're mining gold.
The Nemotron 3 lineup:
- Nano (30B params, 3B active): ships now, with 4x faster throughput than Nemotron 2
- Super (~100B params, 10B active): multi-agent reasoning, coming H1 2026
- Ultra (~500B params, 50B active): advanced reasoning engine, due in 2026
What makes it different:
- Native 1M-token context window
- Up to 60% reduction in reasoning token generation vs previous models
- 4-bit NVFP4 training format optimized for Blackwell GPUs
- 3 trillion tokens of training data released openly
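Those "active params" figures point to mixture-of-experts designs: only a fraction of each model's weights fire per token, but the full weight set still has to sit in memory. A rough back-of-envelope sketch of what that implies, assuming the 4-bit NVFP4 weights quoted above and ignoring KV cache and activation overhead, so these are optimistic lower bounds:

```python
# Rough weight-memory estimate for the Nemotron 3 tiers quoted above.
# Assumes 4-bit (0.5 bytes/param) NVFP4 weights; ignores KV cache,
# activations, and any higher-precision layers.

BYTES_PER_PARAM = 0.5  # 4-bit NVFP4

lineup = {
    "Nano":  {"total_b": 30,  "active_b": 3},
    "Super": {"total_b": 100, "active_b": 10},
    "Ultra": {"total_b": 500, "active_b": 50},
}

def weight_gb(params_billions: float) -> float:
    """GB of memory needed to hold the given parameter count at 4 bits each."""
    return params_billions * 1e9 * BYTES_PER_PARAM / 1e9

for name, spec in lineup.items():
    total_gb = weight_gb(spec["total_b"])
    active_pct = 100 * spec["active_b"] / spec["total_b"]
    print(f"{name}: ~{total_gb:.0f} GB of weights, ~{active_pct:.0f}% active per token")
```

Note the pattern: every tier activates roughly 10% of its parameters per token, which is why MoE models cut compute (and token latency) far more than they cut the VRAM bill.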
The early adopter list reads like a who's who: Perplexity, Cursor, CrowdStrike, Palantir, ServiceNow, Oracle, Zoom, Siemens, Deloitte, and more. Perplexity's CEO called out Nemotron 3 Ultra specifically for their agent router system.
Why this matters strategically: While Meta abandons open source and OpenAI stays closed, Nvidia is flooding the ecosystem with free, high-quality building blocks. Every developer who builds on Nemotron is more likely to need Nvidia hardware to run it. Jensen Huang is playing the long game. Own the ecosystem, not just the chips.
Takeaway: Nvidia releasing open models is strategy. But for developers, the motivation doesn't matter. You're getting state-of-the-art open-weight models with 1M context windows, optimized for exactly the agentic AI workflows everyone's building. If you're evaluating models for production agents, Nemotron 3 belongs on your shortlist.


AI buzz bits
Runway launched GWM-1, its first "world model" that generates interactive 3D environments frame-by-frame in real time. Three variants ship: GWM-Worlds for explorable scenes, GWM-Robotics for robot simulation, and GWM-Avatars for photorealistic talking heads.
Adobe brought Photoshop, Express, and Acrobat directly into ChatGPT, free for all 800 million weekly users. Type "Adobe Photoshop, blur my background" and it just works, with sliders for fine-tuning.
Grok botched the Bondi Beach shooting coverage badly. When a 43-year-old disarmed a gunman, Grok misidentified him, questioned the authenticity of video evidence, and injected irrelevant geopolitical context. The AI later admitted the error came from "viral posts," which is exactly the problem with training on X's real-time firehose.

AI productivity tools
For a full list of 1000+ AI tools, visit our Supertool Directory

Santa Claude

Prompt Your Wishes

I hope you enjoyed the AI buzz today. We need your feedback below to make better content. Refer our newsletter to your friends and help us grow. Cheers, Tim

What did you think of today's newsletter? This helps me make things better.

