Abstract
Running AI today often means choosing between Anthropic and OpenAI, and accepting the cost and privacy trade-offs of shipping data to third parties. What if there were another way? Tools like Transformers.js and WebLLM have matured to the point where useful inference is now possible locally - keeping sensitive data off the network entirely, cutting token costs, and enabling things that work without a server in the loop.
If you've ever called an LLM via API and wondered whether you actually needed to, this talk is for you.
James Hall, creator of jsPDF, shares lessons from building browser-native AI - the demos that worked, and the early Chrome APIs that didn't.
In this talk you'll learn:
- When local inference beats a cloud API call, and how to benchmark the difference on your actual workloads
- How to prevent PII from ever leaving a machine
- The pitfalls that will catch you out and how to design around them
- How to leverage WebGPU as an in-browser data scientist
Speaker
James Hall
Founder and Director @Parallax, Author of jsPDF
James Hall is Tech Director and Founder at Parallax, where he leads AI and software consulting.
Parallax built an LLM-powered mobile web browser for a startup before the launch of ChatGPT, and has since delivered many production AI systems for large enterprises and startups alike.
James is also the creator of jsPDF, one of the most widely used open-source JavaScript libraries, with 40 million monthly downloads.