Summary
Disclaimer: This summary has been generated by AI. It is experimental, and feedback is welcomed. Please reach out to info@qconlondon.com with any comments or concerns.
The presentation discusses the evolving relationship between engineering and data teams, especially in the context of AI advancements.
Main Points Covered:
- The traditional boundaries between engineering and data teams are dissolving, with each team taking on responsibilities traditionally held by the other.
- AI has intensified these integration challenges because models now feed real-time decision-making systems.
- Shared ownership of data quality is emphasized, moving beyond simply improving tooling.
- Practical patterns such as data contracts and schema registries, along with observability practices, are crucial to managing data effectively in AI-driven environments.
- Reliability and operational excellence in data systems are emphasized, so that data work matches the rigour found in software engineering disciplines.
Real-World Examples:
- Past incidents, such as a churn prediction model causing operational disruptions, illustrate the problems created by data mismanagement and blurred responsibilities between teams.
- Data contracts introduced at Pleo clearly define boundaries and responsibilities between teams; these contracts are integrated into the codebase to prevent bad data from reaching production.
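To make the data-contract idea concrete, here is a minimal sketch of a contract enforced in code before events reach production pipelines. This is an illustration, not Pleo's actual implementation; the field names (`user_id`, `plan`, `mrr`) and the contract shape are assumptions for the example.

```python
# Hypothetical data contract: a producer-owned schema that consumers
# validate against at the ingestion boundary (or in CI) so that bad
# data is rejected before it reaches production pipelines.
# All field names here are illustrative, not Pleo's real schema.

CHURN_EVENT_CONTRACT = {
    "user_id": str,
    "plan": str,
    "mrr": float,
}

def validate_event(event: dict, contract: dict) -> list[str]:
    """Return a list of contract violations; an empty list means the event passes."""
    errors = []
    for field, expected_type in contract.items():
        if field not in event:
            errors.append(f"missing field: {field}")
        elif not isinstance(event[field], expected_type):
            errors.append(
                f"{field}: expected {expected_type.__name__}, "
                f"got {type(event[field]).__name__}"
            )
    return errors
```

A producing service would run this check (or a schema-registry equivalent) before publishing, so a contract violation fails fast at the producer rather than surfacing downstream as a broken model or dashboard.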
Actionable Recommendations:
- Implement mental models and real-world tooling patterns to bridge the gap between engineering and data teams.
- Adopt practices such as T-shaped skills development for engineers to increase their effectiveness in AI environments.
This is the end of the AI-generated content.
Abstract
Every senior engineer knows the feeling: a model makes a bad decision, a customer complains, and suddenly you're debugging a system that spans three teams, two pipelines, and a machine learning model nobody fully owns. Where do you even start?
The boundary between engineering and data has been dissolving for years, and AI is making it collapse. Data engineers write infrastructure code. Backend engineers serve ML predictions. Analysts ship production logic. The old world of "engineering builds apps, data builds dashboards" is gone, and what's replaced it is messier, more interesting, and full of opportunity for engineers willing to look beyond their own layer of the stack.
In this talk, I'll share real stories from building data and engineering systems - from a broken billing system that nearly cost us our biggest customers, to a churn prediction model gone haywire because of the smallest change. These stories centre on real incidents, fixes, and hard-won lessons that changed how teams worked together.
You'll walk away with practical mental models, real tooling patterns, and concrete next steps you can take to bridge the gap between Data and Engineering.
You'll learn:
- Why shared ownership of data quality matters more than better tooling, and how to actually build it
- Practical patterns that work today: data contracts and schema registries, observability patterns applied to data and how to deal with the messy reality of production data
- What "T-shaped" really means for senior engineers in the AI era - the specific skills and knowledge that give you leverage when it comes to dealing with data systems
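As a taste of "observability patterns applied to data", here is a small sketch that treats a data batch the way a service health check treats a server: compute a few metrics and flag breaches. The metric choices, field name, and threshold are assumptions for illustration, not the specific patterns from the talk.

```python
# Illustrative data-observability check: emit simple health metrics
# (row count, null rate) for a batch and raise an alert flag when a
# threshold is breached, analogous to a service's error-rate alert.

def null_rate(rows: list[dict], field: str) -> float:
    """Fraction of rows where `field` is missing or None."""
    if not rows:
        return 0.0
    missing = sum(1 for r in rows if r.get(field) is None)
    return missing / len(rows)

def check_batch(rows: list[dict], field: str, max_null_rate: float = 0.05) -> dict:
    """Return a metrics dict plus an alert flag for this batch."""
    rate = null_rate(rows, field)
    return {
        "row_count": len(rows),
        "null_rate": rate,
        "alert": rate > max_null_rate,
    }
```

In practice these metrics would be shipped to the same monitoring stack engineers already use for services, so a spike in null rates pages someone before the downstream model misbehaves.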
Interview:
Who is your talk for?
Senior engineers, staff+ ICs, and engineering leaders who work with (or alongside) data systems, ML models, or AI-powered features - and want to stop treating them as someone else's problem.
Speaker
Lada Indra
Head of Data Platform @Pleo, Previously Head of Data @Legend and Director API Platform BI & Data @Vonage
Lada Indra is the Head of Data Platform @Pleo, where he's responsible for the foundational infrastructure powering all data processing. With over a decade of experience building engineering systems and high-performing teams, Lada previously served as Head of Data @Legend, overseeing web analytics, commercial data, and analytics tooling. Prior to that, he was Data Director for the API platform @Vonage, where he architected scalable systems to process large-scale communications API event data.