2026 Talks
How Open Frontier Labs Actually Train Their Models
Video will be populated after the conference
- Model Systems
Training a large language model is an exercise in tradeoffs you didn't expect. Should you spend a week optimizing infrastructure and architecture, or just start training? This talk covers how to think about pre-training decisions: why architecture changes are rarely about accuracy and almost always about performance, how frontier open labs actually design models, and how to make principled calls when everything is a tradeoff. We'll walk through real examples of decisions that looked obvious in retrospect, and the ones that still don't have clean answers.
Research Engineer
Elie Bakouch
Prime Intellect
Elie Bakouch is a Research Engineer at Prime Intellect, working to advance open pre-training and mid-training. Previously at Hugging Face, he created and trained the SmolLM series of efficient language models and contributed to numerous open research efforts, including Open-R1, SmolVLM, and the open pre-training playbooks, a set of comprehensive guides and recipes for training language models from scratch. His work focuses on making both novel and existing training techniques accessible and reproducible in the open.
The AI Conference for Humans Who Ship
While other conferences theorize, AI Council features the engineers shipping tomorrow's breakthroughs today.