Every Frontier Model Just Got a Reality Check
ARC-AGI-3 dropped this week and the results are brutal. Gemini 3.1 Pro led the pack at 0.37%. GPT-5.4 scored 0.26%. Opus 4.6 got 0.25%. Grok-4.20 scored a flat zero.
Humans scored 100%. On their first try. The gap between us and the best AI is 99.63 percentage points wide.
ARC-AGI-3 is the first interactive AI benchmark: instead of static pattern matching, models have to infer goals, explore, remember, and plan, all with no instructions. The best-performing system was not even an LLM: StochasticGoose, from Tufa Labs, hit 12.58% using reinforcement learning on a CNN. The big language models got crushed.
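To make "interactive" concrete, here is a minimal toy sketch of what this kind of benchmark demands, using an invented gridworld (hypothetical: `GridGoalEnv` and `explore` are illustrative names, not the real ARC-AGI-3 API). The agent is told nothing about the goal; it only finds out by acting, observing, and remembering where it has been:

```python
import random

class GridGoalEnv:
    """Toy interactive environment (illustrative only, not ARC-AGI-3).
    The goal cell is hidden; the agent gets no instructions and must
    discover the objective purely through interaction."""

    def __init__(self, size=5, seed=0):
        rng = random.Random(seed)
        self.size = size
        self.goal = (rng.randrange(size), rng.randrange(size))
        self.pos = (0, 0)

    def step(self, action):
        # Actions: 0=up, 1=down, 2=left, 3=right; moves clamp at walls.
        x, y = self.pos
        dx, dy = [(0, -1), (0, 1), (-1, 0), (1, 0)][action]
        self.pos = (min(max(x + dx, 0), self.size - 1),
                    min(max(y + dy, 0), self.size - 1))
        done = self.pos == self.goal
        return self.pos, (1.0 if done else 0.0), done

def explore(env, max_steps=10_000, seed=1):
    """Random exploration baseline: there is no static input/output pair
    to pattern-match — progress comes only from acting and remembering."""
    rng = random.Random(seed)
    visited = set()
    for t in range(max_steps):
        obs, reward, done = env.step(rng.randrange(4))
        visited.add(obs)
        if done:
            return t + 1, len(visited)
    return None, len(visited)

steps, cells_seen = explore(GridGoalEnv())
```

Even this trivial setup shows why a static next-token predictor struggles: the answer is not in the prompt, it is in the trajectory. A human in the same position just pokes around until the rules become obvious, which is roughly what the 100% first-try score reflects.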
Arm Made a Chip After 35 Years of Not Making Chips
Arm shipped the AGI CPU: a 136-core, 3nm data center processor built specifically for AI inference. Meta is the launch customer; OpenAI, Cerebras, and Cloudflare are signed up.
Arm has spent its entire existence licensing designs to other companies. Making its own chip is a massive strategic shift. If it performs as promised on AI inference workloads, it could reshape the data center market, and that affects cloud pricing for every developer.
Google Goes Live Everywhere
Google launched Gemini 3.1 Flash Live and Search Live in over 200 countries. They also added chat history import from rival AI apps. That last bit is smart. Lower the switching cost and people actually switch.
For builders, Gemini 3.1 Flash Live entering developer preview means real agent applications could start showing up within a month. If you're building tools for global users, the 200-country rollout matters.
Connecting the Dots
ARC-AGI-3 is a reminder that raw capability is not intelligence. The best models fail at things toddlers handle. Arm making its own chip signals that AI inference demand is big enough to change a 35-year business model. And Google flooding 200+ countries with AI means the user base is about to get much bigger. Build for that scale.