"Frontend" used to mean browsers. Then it meant phones. Now it also means glasses, watches, VR headsets, and interfaces where users speak and share images instead of clicking buttons.
The fundamentals haven't changed. We're still building software that people interact with directly. But the surface area has expanded dramatically, and multi-modal AI has introduced interaction patterns we're only beginning to understand. We're not just designing for humans anymore; we're designing for AI agents that act on their behalf.
This track brings together practitioners working across these realities. You'll hear from engineers building local-first sync engines, from developers whose frameworks and tools were once aimed at humans but are increasingly used by agents, and from people creating experiences for devices that barely existed a few years ago. Expect honest accounts of what's working, what isn't, and the decisions that only make sense once you've deployed to real people.
If you're building for humans, or for AIs that act on their behalf, whatever the screen and whatever the input, this track is for you.
From this track
How React Internals are Adapting to Fine-Grained Reactivity
Wednesday Mar 18 / 10:35AM GMT
Details coming soon.
Eduardo Bouças
Distinguished Software Engineer @Netlify
Computer Use Agents: The Frontier of Vision-Based Automation
Wednesday Mar 18 / 11:45AM GMT
Computer Use Agents represent a paradigm shift in software interaction: AI models trained to operate interfaces visually, mimicking human interaction rather than relying on technical APIs.
Stefan Dirnstorfer
CTO @Thetaris GmbH, Architect of ThetaML & Thetaris’ Testing Application, 42 Years into Software
Optimizing Performance with Smart Middleware and Edge Handlers
Wednesday Mar 18 / 01:35PM GMT
Details coming soon.
Julie Qiu
Uber Tech Lead, Google Cloud SDK @Google, Building Client Libraries and Command Line Tools Across Different Language Ecosystems
Architecting AI-Driven Game Creation: From Research to Production-Scale Systems
Wednesday Mar 18 / 02:45PM GMT
The next generation of games hasn't been invented yet, but we can be pretty sure their creation will be heavily democratized by generative tools. The hard part isn't just building smart models; it's turning research-grade AI into production-ready, high-fidelity creation tools.
Danielle An
Principal Engineer / GenAI Architect @Meta, Ph.D. with 15 years of professional experience in film, MR and gaming
The Frontend Architect’s Guide to Multi-modal UIs
Wednesday Mar 18 / 03:55PM GMT
Details coming soon.