We’re Being Too Loose With the Term “World Model”
I finally got through the three-plus-hour Max Bennett interview on MLST (link at the end). It took me over two weeks to finish. But it sharpened something I was already thinking while reading Packy McCormick’s Not Boring essay on world models, co-written with Pim de Witte.
I think we are still too loose with the phrase “world model”.
Current LLMs obviously have models. You do not get that level of performance without internal structure that captures a surprising amount about language and the world. But Bennett’s distinction is more demanding than that. A world model, in the stronger sense, is about interventions and causality: I predict that this will happen if I do X, I do X, and then I update from the gap between what I expected and what actually happened.
That is not the same thing as learning from a fixed corpus.
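To make that distinction concrete, here is a minimal toy sketch (mine, not from the interview or the essay) of the predict-act-update loop: an agent holds a belief about a hidden property of its environment, predicts the outcome of an action, acts, and revises its belief from the prediction error. Everything here, including the names and the hidden multiplier, is an illustrative assumption.

```python
class ToyWorldModel:
    """A minimal agent that learns by intervening, not by reading a corpus."""

    def __init__(self, guess=0.0, lr=0.5):
        self.guess = guess  # believed multiplier governing the environment
        self.lr = lr        # how strongly prediction error revises the belief

    def predict(self, action):
        # "I think this will happen if I do X."
        return self.guess * action

    def update(self, action, observed):
        # Update from the gap between expectation and reality.
        error = observed - self.predict(action)
        self.guess += self.lr * error * action / max(action * action, 1e-9)
        return error


def environment(action, true_multiplier=3.0):
    # The world the agent can only learn about by acting in it.
    return true_multiplier * action


model = ToyWorldModel()
for _ in range(20):
    action = 1.0                      # do X
    observed = environment(action)    # see what actually happened
    model.update(action, observed)    # revise the internal model
```

The contrast with corpus learning is the loop itself: each step is an intervention the agent chose, and the training signal is the error of its own forecast rather than a fixed dataset.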
What also stayed with me is his point about language. Language was not just a better way to communicate observations. It let humans share simulations, refine them together, and build on them across generations. That feels like a deeper explanation for why human knowledge compounds the way it does.
Read through that lens, world models start to look less like a side branch of robotics and more like a serious attempt to move beyond systems that are very good at describing the world but cannot really test themselves against it.
I still think this area is easy to overstate, and the term gets used too casually. But I do think the direction matters.
Maybe the next step after LLMs is not just better text generation, but systems that can form hypotheses, act, and revise.
MLST interview: https://lnkd.in/eMaj-apq
Not Boring essay on world models: https://lnkd.in/eQH2KW_E