The Right 300 Tokens Beat 100k Noisy Ones: The Architecture of Context Engineering

Abstract

Your agent has 100k tokens of context. It still forgets what you told it two messages ago.

Prompt engineering taught us to craft the perfect instruction. Context engineering asks a different question: what does your model need to see and what should it never see at all? It's the shift from writing prompts to designing context.

In this talk, we'll dissect four antipatterns killing your AI agents and the architectural fixes that actually work:

  • The Stuffed Prompt: You crammed everything upfront and hoped for the best. But static context doesn't scale. We'll explore dynamic loading and context refinement: fetching what's needed when it's needed, and staying within your context window without losing signal. (And yes, we'll bust the myth that position doesn't matter: models do lose track of what's buried in the middle.)
  • The Wrong Tool for the Job: You picked one retrieval method and used it everywhere. But RAG isn't always the answer. Neither are tools. Neither is an exact match. We'll break down when embeddings help, when MCP gives you precision, and when a simple lookup beats both.
  • The Goldfish Agent: Your AI agent forgets everything between sessions. Or worse, remembers everything forever. We'll explore short-term and long-term memory, plus pruning and compaction strategies: what to persist, what to summarize, where to store it, and when to let go.
  • The Vibes Eval: You shipped because it "felt right." But you can't improve what you don't measure. We'll build eval strategies that prove your context choices work, or expose the tokens you're wasting.
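To make the first antipattern concrete: instead of stuffing every snippet into the prompt, a context engineer scores candidates for relevance and packs only what fits a small token budget. The sketch below is purely illustrative and not from the talk; all function names are hypothetical, the token estimate is a rough character-count heuristic, and the word-overlap scorer is a stand-in for the embeddings, MCP calls, or exact lookups the talk compares.

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    return max(1, len(text) // 4)

def score(snippet: str, query: str) -> int:
    # Toy relevance signal: count overlapping words. A real system
    # would use embeddings, BM25, or an exact lookup instead.
    query_words = set(query.lower().split())
    return len(query_words & set(snippet.lower().split()))

def pack_context(snippets: list[str], query: str, budget_tokens: int) -> list[str]:
    # Rank by relevance, then greedily pack snippets that fit the
    # budget, skipping anything with zero relevance (the "noise").
    ranked = sorted(snippets, key=lambda s: score(s, query), reverse=True)
    chosen, used = [], 0
    for snippet in ranked:
        cost = estimate_tokens(snippet)
        if score(snippet, query) > 0 and used + cost <= budget_tokens:
            chosen.append(snippet)
            used += cost
    return chosen

snippets = [
    "deploy pipeline uses blue-green rollout",
    "lunch menu: pasta today",
]
print(pack_context(snippets, "how does the deploy rollout work", 50))
```

The point of the sketch is the shape of the decision, not the scoring function: a 50-token budget of relevant material beats concatenating everything and hoping the model finds it.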

Your context window called. It wants its tokens back!

Bonus: We'll use a coding agent to explain these patterns so you'll learn how they work under the hood, but everything also applies to AI agents in general.


Speaker

Patrick Debois

AI Product Engineer @Tessl, Co-Author of the "DevOps Handbook", Content Curator at AI Native Developer Community

Patrick is a true pioneer: credited with coining the term DevOps, co-authoring the DevOps Handbook, and launching the very first DevOpsDays back in 2009. Since then, he has been shaping the tech industry with his unmatched ability to bring development, operations, and now GenAI together in truly transformative ways. He now works at Tessl, where he leads Product DevRel and the AI Native Dev community.

Here’s why you don’t want to miss this:

🔹 Independent consultant at the cutting edge of GenAI and DevOps

🔹 Fractional CTO, advisor, and workshop leader

🔹 Former VP of Engineering, Distinguished Engineer, CTO

🔹 Guided teams at industry giants like Atlassian and Snyk

🔹 Active speaker, community builder, and lifelong learner

Patrick is all about raising the bar—helping companies embed engineering rigor into their GenAI journeys, inspiring teams to embrace AI for automation, and advising GenAI platforms on how to truly deliver value. He bridges the gap between management and engineering with a rare mix of deep tech expertise and people-first thinking.


Date

Wednesday Mar 18 / 10:35AM GMT (50 minutes)

Location

Fleming (3rd Fl.)

Topics

agentic coding, DevOps, technical leadership


From the same track

Session

Explicit Semantics for AI Applications: Ontologies in Practice

Wednesday Mar 18 / 11:45AM GMT

Modern AI applications struggle not because of a lack of models, but because meaning is implicit, fragmented, and brittle. In this talk, we’ll explore how making semantics explicit (using ontologies and knowledge graphs) changes how we design, build, and operate AI systems.


Jesus Barrasa

Field CTO for AI @Neo4j

Session

Building an AI Ready Global Scale Data Platform

Wednesday Mar 18 / 01:35PM GMT

As organizations move from single-cloud setups to hybrid and multi-cloud strategies, they are under pressure to build data platforms that are both globally available and AI-ready.


George Peter Hantzaras

Engineering Director, Core Platforms @MongoDB, Open Source Ambassador, Published Author

Session

Your Agent Sandbox Doesn't Know My Authz Model: A Standard-Shaped Hole

Wednesday Mar 18 / 02:45PM GMT

Sandboxes are the first line of defence for agentic systems: restrict the bash commands, filter the URLs, lock down the filesystem. But sandboxes operate on the syntax of requests, not the semantics of your authorization model.


Paul Carleton

Member of Technical Staff @Anthropic, Core Maintainer of MCP

Session

Beyond Benchmarks: How Evaluations Ensure Safety at Scale in LLM Applications

Wednesday Mar 18 / 03:55PM GMT

As LLM systems move from prototypes to production, the gap between benchmark performance and real-world reliability becomes impossible to ignore. Models that score well on benchmarks can still fail unpredictably when facing the complexity, ambiguity, and edge cases of real users.


Clara Matos

Director of Applied AI @Sword Health, Focused on Building and Scaling Machine Learning Systems