Mila World Modeling Workshop: wrap-up
What if large language models had eyes, ears, and other sensors that let them perceive the real world? What kind of internal representation would they build? Could such systems become fully autonomous, and what scientific and technical breakthroughs would it take to get there?
These questions shaped the Workshop on World Modeling, co-hosted by Mila and Lambda, a gathering dedicated to advancing the next generation of intelligent systems. Explore the full workshop: https://world-model-mila.github.io/
Mila is a premier AI research institute founded in 1993 by Turing Award winner Yoshua Bengio. Its mission is to push the boundaries of machine learning through rigorous, foundational research and interdisciplinary collaboration.
Lambda supported the workshop through both funding and scientific contributions. The event brought together researchers from across AI and machine learning to tackle one of the field's most ambitious challenges: building systems that can model and understand the world. Keynote speakers included Yoshua Bengio, Yann LeCun, Juergen Schmidhuber, Shirley Ho, Sherry Yang, and Lambda's own Amir Zadeh.
Lambda's Amir Zadeh presenting at Mila World Modeling Workshop, 2026
Yoshua emphasized the importance of safety and autonomy, raising critical questions about alignment and control as AI systems become more capable and integrated into real-world settings.
Yann and Shirley focused on the technical challenges ahead—scalable architectures, representation learning, multimodal integration, and the computational foundations required to build robust world models.
Juergen and Amir addressed infrastructure and long-term strategy. Juergen reflected on lessons learned from decades of machine learning research and how those insights can guide the path toward general world models. Amir pointed to high-quality data, simulation environments, and the ability to solve real-world multimodal problems as essential stepping stones.
Sherry advocated for a polymathic world model, grounded in scientific reasoning. In this view, world models should not merely capture correlations, but integrate structured knowledge from disciplines such as physics and other sciences to build deeper, causal representations of reality.
The workshop accepted 50 papers, including 7 oral presentations. Contributions spanned video modeling, AI for science, multimodal learning, large language models, and JEPA-based approaches.
Among the highlights: Tal Daniel, a collaborator from Carnegie Mellon University, presented joint work with Lambda titled "World Modeling using Latent Particle Models." The paper was accepted by ICLR as an oral presentation, placing it in the top 1.18% of submissions.
Read the paper: https://taldatech.github.io/lpwm-web/
The Workshop on World Modeling underscored both the promise and the complexity of building AI systems that can perceive, reason about, and act in the real world. Lambda was glad to partner with Mila to advance this work and contribute to the foundations of what comes next.