
2026 Talks

Elie Bakouch
Research Engineer | Prime Intellect

How Open Frontier Labs Actually Train Their Models

  • Model Systems

Training a large language model is an exercise in tradeoffs you didn't expect. Should you spend a week optimizing infrastructure and architecture, or just start training? This talk covers how to think about pre-training decisions: why architecture changes are rarely about accuracy and almost always about performance, how frontier open labs actually design their models, and how to make principled calls when everything is a tradeoff. We'll walk through real examples: decisions that look obvious in retrospect, and ones that still don't have clean answers.

Elie Bakouch is a Research Engineer at Prime Intellect, working to advance open pre-training and mid-training. Previously at Hugging Face, he created and trained the SmolLM series of efficient language models and contributed to numerous open research efforts, including Open-R1, SmolVLM, and the open pre-training playbooks (comprehensive guides and recipes for training language models from scratch). His work focuses on making both novel and existing training techniques accessible and reproducible in the open.