Meta Bets the Farm With Up to $135 Billion on AI
Meta just told the world it plans to spend somewhere between $115 billion and $135 billion on AI in 2026. That is roughly double what the company spent last year. On top of that, they rolled out Muse Spark, a new flagship model from the Superintelligence Labs team now run by Alexandr Wang.
Why it matters: Zuck is not playing around. When a company drops that kind of cash, they are not hoping it works out. They are betting the future of the whole business on it.
My take: I have seen a lot of folks wave their arms about an AI bubble, and maybe they are right. But if Meta is wrong about this one, it is not a correction. It is a crater. I think they are right that the winners take the whole table, and they would rather overspend than get left behind.
Snap Cuts 1,000 Jobs and Says the Quiet Part Out Loud
Evan Spiegel at Snap just laid off about a thousand people and closed more than 300 open roles. He did not hide behind corporate mush either. He flat out said AI is letting smaller teams do the same work.
Why it matters: For years CEOs gave us soft talk about AI being a tool to help workers. Now they are dropping the act. If your job can be done by a model, your job is getting done by a model.
My take: I feel for the folks who got walked out the door. That is real pain. But I respect Spiegel a little for saying it straight. The rest of the C-suite class in this country is still pretending otherwise, and that lie is worse than the layoff itself. People deserve to know what is coming so they can plan.
Courts Are Finally Swinging Back at AI Hallucinations
A Nebraska lawyer named Greg Lake just got suspended because his appellate brief contained 57 busted citations, 20 of which were straight-up AI hallucinations. The court said his excuse did not hold water. Courts across the country have already handed out at least $145,000 in sanctions this quarter for the same kind of nonsense.
Why it matters: The tools are good, but they still make stuff up. Lawyers, doctors, accountants, anybody whose work has to be right cannot just copy and paste what a chatbot hands them.
My take: This should not be a surprise to anyone who has actually used these tools. They are confident as all get out, and sometimes they are dead wrong. If you are a professional and you are not checking the output, that is on you, not the model. The suspension is fair. I would say it is overdue.
Where This Leaves Us
Big tech is pouring oceans of money into AI. Workers are finding out their jobs were more replaceable than the pep talks suggested. And the people using these tools without checking them are starting to pay a real price.
Nobody is coming to slow this down. So the only move is to learn the tools, question the output, and stop pretending any of this is going to look like it did five years ago.