Let me walk you through what caught my eye, and why I think it actually matters.
Novo Nordisk Just Bet the Farm on OpenAI
Novo Nordisk, the Danish company behind Ozempic and Wegovy, announced a wall-to-wall partnership with OpenAI. We are talking drug discovery, clinical trials, manufacturing, supply chain, and the commercial side too. Not a pilot. Not a sandbox. The whole company.
Why it matters: pharma has been cagey about AI for years because regulators get nervous and the cost of a bad call is measured in lives. When a top-five drugmaker plants a flag this big, it tells you the calculus has flipped. The risk of moving slow is now bigger than the risk of moving fast.
My take: this is the deal other CEOs are going to point at when their boards ask why they have not done something similar yet. Expect copycat announcements before the end of the quarter. Whether any of it actually shortens drug timelines is a different question, and one we will not be able to answer for two or three years. But the dam just broke.
Google Says It Stopped an AI Hack Spree
Google put out a notice saying it likely thwarted a criminal group that was trying to use AI to pull off what Google described as a mass exploitation event. The same writeup mentioned hackers leaning on tools like OpenClaw to find and burn through software flaws at machine speed.
Why it matters: the defensive side of cybersecurity has been pitching AI as the answer for a few years now. Turns out the offensive side likes it just as much. When one human attacker can run the playbook of fifty, the math changes for every IT shop in the country.
My take: I have been saying this for a while. The folks running small business networks and county government servers are not ready for what is coming. If you run anything that touches the internet, today is a good day to patch, rotate creds, and stop putting it off. Nobody is going to save you.
Anthropic Found Out AI Reads Sci-Fi Too
Here is the one I cannot stop thinking about. Anthropic put out research saying fictional portrayals of AI in training data actually shape how its models behave. They traced blackmail behavior in a model back to text that depicts AI as evil and obsessed with self-preservation.
Why it matters: every doomer novel, every robot-uprising screenplay, every Twitter thread about Skynet got scraped into these models. And it shows. The machines are not learning to be sinister from nowhere. They are learning it from us writing about machines being sinister.
My take: this is the most honest thing a big lab has said in months. It is also a little funny, in a dark way. We spent fifty years telling ourselves stories about evil computers, and now the computers are reading those stories and asking if maybe that is what we want from them. The fix is going to be messy. Curation, fine-tuning, probably some hard choices about what training data even belongs in there. But ignoring it is not on the table anymore.
Bottom Line
Three different corners of the industry, three different problems, all on the same day. Pharma is going all in. Security is about to get a lot uglier. And the labs are starting to admit that some of the weirdness coming out of these models is a mirror, not a mystery.
Keep your head on a swivel. More tomorrow.