Abstract
The next generation of games hasn’t been invented yet, but we can be confident their creation will be heavily democratized by generative tools. The hard part isn’t just building smart models; it’s turning research-grade AI into production-ready, high-fidelity creation tools. That means rethinking the traditional game dev stack: static asset pipelines are no longer enough, and we have to design for dynamic inference and real-time state sync from the start.
This session looks at the systems design behind AI‑driven game creation. We’ll dig into the backend architecture for collaborative world‑building, with a focus on distributed GPU orchestration, semantic caching, and speculative execution. You’ll see how to build infrastructure that keeps generative tools responsive enough to maintain a creator’s flow state instead of breaking it.
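To make the semantic-caching idea concrete: a minimal illustrative sketch (not the session's actual implementation), where a toy bag-of-characters embedding stands in for a learned encoder, and near-duplicate prompts are served from cache instead of triggering a GPU round-trip:

```python
import math

def embed(text):
    # Toy bag-of-characters embedding; a real system would use a
    # learned sentence/image-prompt encoder here.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord('a')] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class SemanticCache:
    """Return a cached generation when a new prompt is semantically
    close to one already answered, skipping a model call."""
    def __init__(self, threshold=0.9):
        self.threshold = threshold
        self.entries = []  # list of (embedding, result)

    def get(self, prompt):
        q = embed(prompt)
        best, best_sim = None, 0.0
        for vec, result in self.entries:
            sim = cosine(q, vec)
            if sim > best_sim:
                best, best_sim = result, sim
        return best if best_sim >= self.threshold else None

    def put(self, prompt, result):
        self.entries.append((embed(prompt), result))

cache = SemanticCache(threshold=0.9)
cache.put("a mossy stone bridge", "asset_0042")
hit = cache.get("mossy stone bridge")    # near-duplicate: cache hit
miss = cache.get("a neon city skyline")  # unrelated prompt: cache miss
```

The threshold trades freshness against latency: too low and creators see stale assets for genuinely new prompts, too high and the cache never fires.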
Key Takeaways:
- Human-in-the-Loop Architecture: Designing stateful systems that treat AI as a collaborator, not just a request-response API.
- Solving for Latency: Using speculative inference loops to help make AI assistance feel interactive and fight the “blank page” problem.
- Production Hardening: Practical strategies for scaling non-deterministic AI outputs inside the deterministic constraints of modern game engines.
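The speculative-inference loop from the takeaways can be sketched roughly like this. Everything here is hypothetical (the `generate` and `predict_next` callables, the synchronous control flow): a real system would run speculation on background GPU workers. The idea is simply to pre-generate the creator's most likely next request during idle time so that, on a correct guess, the result is already waiting:

```python
class SpeculativeAssistant:
    """While the creator is idle, speculatively generate the most
    likely next variation so a matching request returns instantly."""
    def __init__(self, generate, predict_next):
        self.generate = generate          # slow model call (assumed)
        self.predict_next = predict_next  # heuristic predictor (assumed)
        self.speculative = {}             # prompt -> pregenerated result
        self.model_calls = 0

    def on_idle(self, last_prompt):
        # Run ahead of the user: generate the predicted follow-up.
        guess = self.predict_next(last_prompt)
        if guess not in self.speculative:
            self.speculative[guess] = self._call_model(guess)

    def request(self, prompt):
        # Serve from speculation when the prediction was right;
        # otherwise fall back to a normal (slow) model call.
        if prompt in self.speculative:
            return self.speculative.pop(prompt)
        return self._call_model(prompt)

    def _call_model(self, prompt):
        self.model_calls += 1
        return self.generate(prompt)

# Hypothetical predictor: assume creators often ask for a "larger"
# variant of whatever they just generated.
assistant = SpeculativeAssistant(
    generate=lambda p: f"mesh<{p}>",
    predict_next=lambda p: p + ", larger",
)
first = assistant.request("oak tree")         # slow path: model call
assistant.on_idle("oak tree")                 # speculate during idle time
fast = assistant.request("oak tree, larger")  # served from speculation
```

Mispredictions only cost otherwise-idle GPU time, which is why this pattern helps fight the "blank page" problem without inflating tail latency.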
Speaker
Danielle An
Principal Engineer / GenAI Architect @Meta, Ph.D. with 15 years of professional experience in film, MR, and gaming
Danielle is a Principal Engineer at Meta Horizon, where she spearheads Generative AI initiatives designed to redefine immersive world-building. A veteran of the film industry with a pedigree including Pixar and DreamWorks, Danielle has spent her career at the high-stakes intersection of artistry and technical innovation. From architecting the Instagram AR platform to scaling Meta Avatars, she has a proven track record of shipping products that resonate with billions of users globally.