Abstract
At Duolingo, we realized that successful AI adoption would require deliberate learning — not just access to tools. Over the past year, we scaled AI usage across 300+ engineers through structured dogfooding programs, live training, office hours, and AI observability dashboards. The goal wasn’t simply increasing usage; it was building literacy, trust, and shared norms around how AI should be used in production engineering workflows.
That foundation enabled us to take on harder challenges — including introducing AI into code review itself. Our AI-powered PR Risk Score and auto-approval system now safely approves 10% of pull requests and cuts developer wait times in half. But the real work wasn’t model tuning — it was cultural. We had to design guardrails engineers respected, measure impact without incentivizing reckless automation, and ensure that automation strengthened rather than eroded review quality.
Autonomous review wasn’t the endpoint; it was proof that with the right educational groundwork, teams can safely expand AI into high-trust, high-stakes systems. This talk explores how building AI literacy unlocks the ability to redesign core engineering workflows — and what that means for organizations navigating the next phase of AI adoption.