2026 Talks
Beyond Next-Token Prediction: Joint Embeddings, World Models, and Why Natural Language Isn't Enough
Video will be populated after the conference
- Model Systems
Large language models can write poetry and debug code, but ask them to reason about physical systems at scale and they collapse. Why? Because natural language is fundamentally the wrong representation for how the world actually works. The future of AI isn't better language models: it's joint-embedding architectures that bridge natural language with formal domain-specific languages where physics, causality, and constraints live.
This talk introduces a framework for world models that operate across multiple representation spaces simultaneously. Instead of forcing everything through the bottleneck of natural language tokens, we learn joint embeddings that align:
- Natural language descriptions with formal specifications
- High-level goals with executable domain-specific languages (DSLs)
- Physical constraints with learned dynamics
- Human intent with machine-verifiable semantics
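The alignment idea behind these pairings can be sketched as a CLIP-style contrastive objective over paired natural-language and DSL embeddings. This is a minimal illustrative sketch, not the talk's actual method: the function names, dimensions, and toy "encoder outputs" below are all hypothetical.

```python
# Minimal sketch (illustrative, hypothetical): contrastive alignment of
# paired natural-language (NL) and domain-specific-language (DSL) embeddings.
import numpy as np

def normalize(x):
    # Project embeddings onto the unit sphere so dot products are cosines.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def info_nce_loss(nl_emb, dsl_emb, temperature=0.07):
    """Symmetric InfoNCE loss: row i of nl_emb is paired with row i of dsl_emb.

    Matched NL/DSL pairs are pulled together in the joint space; mismatched
    pairs within the batch act as negatives, as in CLIP-style training.
    """
    nl, dsl = normalize(nl_emb), normalize(dsl_emb)
    logits = nl @ dsl.T / temperature      # (batch, batch) similarity matrix
    labels = np.arange(len(logits))        # diagonal entries are the positives

    def xent(lg):
        # Numerically stable cross-entropy against the diagonal labels.
        lg = lg - lg.max(axis=1, keepdims=True)
        log_probs = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    # Average both directions: NL -> DSL retrieval and DSL -> NL retrieval.
    return (xent(logits) + xent(logits.T)) / 2

# Toy stand-ins for encoder outputs: paired rows share latent semantics.
rng = np.random.default_rng(0)
shared = rng.normal(size=(4, 16))
nl_batch = shared + 0.1 * rng.normal(size=(4, 16))
dsl_batch = shared + 0.1 * rng.normal(size=(4, 16))
loss = info_nce_loss(nl_batch, dsl_batch)
print(float(loss))
```

Because matched pairs share latent structure, the loss on correctly paired batches comes out lower than on batches where the NL/DSL pairing is scrambled, which is the signal a real system would use to learn the cross-representation mapping.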
The key insight: scalability doesn't come from bigger transformers; it comes from the right representations. DSLs give us composability, verifiability, and orders-of-magnitude efficiency gains that natural language simply cannot provide. But humans think in natural language. The breakthrough is alignment mechanisms that map fluidly between natural language and formal languages while preserving semantic structure.
I'll demonstrate how this enables world models that scale to complex physical systems, generalize beyond training distributions, and, critically, fail gracefully with interpretable error modes. We'll see applications ranging from healthcare and networking to manufacturing systems, where traditional end-to-end learning hits fundamental walls.
The next generation of AI won't just chat: it will build, verify, and scale.
Founder
Sriram Vishwanath
Georgia Institute of Technology
Sriram Vishwanath received the B.Tech. degree in Electrical Engineering from the Indian Institute of Technology (IIT) Madras, India, in 1998, the M.S. degree in Electrical Engineering from the California Institute of Technology (Caltech), Pasadena, CA, USA, in 1999, and the Ph.D. degree in Electrical Engineering from Stanford University, Stanford, CA, USA, in 2003. Currently, he is the Byers Chair in Electrical and Computer Engineering at the Georgia Institute of Technology (Georgia Tech) and a GRA Eminent Scholar. Prior to this, he was a Professor in the Chandra Department of Electrical and Computer Engineering at The University of Texas at Austin.
Sriram’s research spans artificial intelligence/machine learning (AI/ML), decentralized systems, and information and coding theory. He has authored over 300 refereed research papers and received multiple research awards. He works across a diverse set of areas and specializes in bridging the gap between theory and practice; in particular, he has been involved in multiple startups in the security, networking, healthcare, AI/ML, and crypto spaces.
Sriram received the NSF CAREER Award in 2005 and the ARO Young Investigator Award in 2008. He was the 2014 UT Faculty Entrepreneur of the Year and was named to the Reuters list of Highly Cited Researchers in 2014 and 2015. Sriram is a Fellow of the IEEE, a Senior Member of the National Academy of Inventors (NAI), and a Technical Fellow for Distributed Systems and Machine Learning at MITRE Labs.
The AI Conference for Humans Who Ship
While other conferences theorize, AI Council features the engineers shipping tomorrow's breakthroughs today.