Architecting AI Driven Game Creation: From Research to Production Scale Systems

Abstract

The next generation of games hasn't been invented yet, but we can be fairly sure their creation will be heavily democratized by generative tools. The hard part isn't just building smart models; it's turning research-grade AI into production-ready, high-fidelity creation tools. That means rethinking the traditional game dev stack: static asset pipelines are no longer enough, and we have to design for dynamic inference and real-time state sync from the start.

This session looks at the systems design behind AI‑driven game creation. We’ll dig into the backend architecture for collaborative world‑building, with a focus on distributed GPU orchestration, semantic caching, and speculative execution. You’ll see how to build infrastructure that keeps generative tools responsive enough to maintain a creator’s flow state instead of breaking it.
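As a concrete illustration of the semantic-caching idea mentioned above, here is a minimal sketch: reuse a previous generation when a new prompt's embedding is close enough to one already served, skipping the GPU round-trip. All names here (`embed`, `SemanticCache`, the toy bigram embedding, the 0.9 threshold) are hypothetical stand-ins, not the session's actual implementation.

```python
import math

def embed(text):
    # Toy embedding for illustration only: normalized character-bigram counts.
    # A real system would use a learned embedding model instead.
    vec = {}
    for a, b in zip(text, text[1:]):
        vec[a + b] = vec.get(a + b, 0) + 1
    norm = math.sqrt(sum(v * v for v in vec.values())) or 1.0
    return {k: v / norm for k, v in vec.items()}

def cosine(u, v):
    # Cosine similarity of two sparse, already-normalized vectors.
    return sum(u[k] * v.get(k, 0.0) for k in u)

class SemanticCache:
    def __init__(self, threshold=0.9):
        self.threshold = threshold
        self.entries = []  # list of (embedding, cached result)

    def lookup(self, prompt):
        emb = embed(prompt)
        best = max(self.entries, key=lambda e: cosine(emb, e[0]), default=None)
        if best and cosine(emb, best[0]) >= self.threshold:
            return best[1]  # cache hit: reuse a prior generation
        return None         # cache miss: caller falls through to inference

    def store(self, prompt, result):
        self.entries.append((embed(prompt), result))

cache = SemanticCache(threshold=0.9)
cache.store("generate a mossy stone bridge", "asset_042")
# A near-duplicate prompt hits the cache; an unrelated one does not.
print(cache.lookup("generate a mossy stone bridge!"))
print(cache.lookup("delete all terrain"))
```

The design point is that exact-match caching is nearly useless for free-form creator prompts; matching on embedding similarity is what lets the cache absorb the long tail of near-duplicate requests.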

Key Takeaways:

  • Human-in-the-Loop Architecture: Designing stateful systems that treat AI as a collaborator, not just a request-response API.
  • Solving for Latency: Using speculative inference loops to make AI assistance feel interactive and to counter the “blank page” problem.
  • Production Hardening: Practical strategies for scaling non-deterministic AI outputs inside the deterministic constraints of modern game engines.
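To make the speculative-inference idea in the takeaways concrete, here is a minimal sketch of one possible loop: while the creator pauses, start inference for the most likely next prompts so a result is often already waiting when they commit. `predict_next` and `run_model` are hypothetical stand-ins for a real intent predictor and a real inference backend; none of this is the speaker's actual architecture.

```python
from concurrent.futures import ThreadPoolExecutor

def predict_next(current_prompt):
    # Toy predictor: guess a few likely refinements of the in-progress prompt.
    return [current_prompt + suffix for suffix in (", stylized", ", realistic")]

def run_model(prompt):
    # Stand-in for an expensive GPU inference call.
    return f"generated({prompt})"

class SpeculativeLoop:
    def __init__(self):
        self.pool = ThreadPoolExecutor(max_workers=2)
        self.inflight = {}  # prompt -> Future for a speculative generation

    def on_pause(self, current_prompt):
        # Creator stopped typing: speculatively run the likely next prompts.
        for guess in predict_next(current_prompt):
            if guess not in self.inflight:
                self.inflight[guess] = self.pool.submit(run_model, guess)

    def on_commit(self, prompt):
        # Creator pressed "generate": use the speculative result if we guessed right.
        future = self.inflight.pop(prompt, None)
        if future is not None:
            return future.result()  # usually already finished: feels instant
        return run_model(prompt)    # miss: fall back to a normal inference call

loop = SpeculativeLoop()
loop.on_pause("castle on a cliff")
print(loop.on_commit("castle on a cliff, stylized"))
```

The trade-off is wasted GPU work on wrong guesses in exchange for hiding inference latency on right ones, which is what keeps the tool inside a creator's flow state.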

Speaker

Danielle An

Principal Engineer / GenAI Architect @Meta, Ph.D. with 15 years of professional experience in film, MR, and gaming

Danielle is a Principal Engineer at Meta Horizon, where she spearheads Generative AI initiatives designed to redefine immersive world-building. A veteran of the film industry with a pedigree including Pixar and DreamWorks, Danielle has spent her career at the high-stakes intersection of artistry and technical innovation. From architecting the Instagram AR platform to scaling Meta Avatars, she has a proven track record of shipping products that resonate with billions of users globally.


From the same track

Session

Computer Use Agents: The Frontier of Vision-Based Automation

Wednesday Mar 18 / 11:45AM GMT

Computer Use Agents represent a paradigm shift in software interaction: AI models trained to operate interfaces visually, mimicking human interaction rather than relying on technical APIs.


Stefan Dirnstorfer

CTO @Thetaris GmbH, Architect of ThetaML & Thetaris’ Testing Application, 42 Years into Software

Session

Optimizing Performance with Smart Middleware and Edge Handlers

Wednesday Mar 18 / 01:35PM GMT

Details coming soon.


Julie Qiu

Uber Tech Lead, Google Cloud SDK @Google, Building Client Libraries and Command Line Tools Across Different Language Ecosystems

Session

How React Internals are Adapting to Fine-Grained Reactivity

Wednesday Mar 18 / 10:35AM GMT

Details coming soon.


Eduardo Bouças

Distinguished Software Engineer @Netlify

Session

The Frontend Architect’s Guide to Multi-modal UIs

Wednesday Mar 18 / 03:55PM GMT

Details coming soon.