Ethical AI Is an Engineering Problem

Abstract

The systems we design encode the values we choose

AI systems are increasingly embedded in critical products and decision-making processes. Yet many of the risks they introduce—bias, opacity, security or privacy vulnerabilities, and high computational cost—are often treated as policy or governance issues rather than engineering ones. History suggests otherwise. Every transformative technology—from electricity to aviation—eventually required new engineering practices and safety standards to make it safe and reliable at scale.

AI is going through the same transition. In this talk, we will look at real-world failures in AI systems and examine how issues such as discrimination, lack of explainability, and security risks emerge from technical design choices. We will then explore how ethical principles such as fairness, transparency, security, and sustainability can be translated into design and engineering decisions across the AI lifecycle.

Ethical AI is not an abstract ideal—it is the result of the architectural and engineering decisions we make when building systems.

Interview:

What is your session about, and why is it important for senior software developers?

This session explores why ethical AI is not only a policy or governance issue, but also a core engineering challenge. Many of the risks associated with AI systems—bias, lack of transparency, security vulnerabilities, or excessive computational cost—often emerge from technical design choices made during development.

For senior software developers and architects, this means that ethical outcomes are not abstract ideals but properties of the systems they design. The talk examines real-world failures in AI systems and shows how architectural decisions, data pipelines, model evaluation, and system observability directly influence fairness, transparency, security, and sustainability.

Technology often evolves much faster than regulation, meaning engineers frequently operate in areas where clear rules or standards do not yet exist and new risks are still emerging. In this context, ethical principles such as fairness, transparency, security, and sustainability can serve as practical guides for designing and implementing new technological products responsibly.

The goal is to help engineers understand how ethical principles can be translated into concrete engineering practices across the AI lifecycle.

Why is it critical for software leaders to focus on this topic right now, as we head into 2026?

AI systems are rapidly moving from experimental tools into components of critical infrastructure embedded in products, platforms, and decision-making processes. As adoption accelerates, the consequences of poorly designed AI systems are becoming more visible—from discriminatory outcomes to security vulnerabilities and unsustainable computational costs.

We are at a stage similar to other major technological transitions in history, where new engineering practices must emerge to make systems safe and reliable at scale. Software leaders play a key role in shaping those practices. As we head into 2026, organizations that treat ethical AI as an engineering discipline—rather than an afterthought—will be better positioned to build trustworthy, resilient, and ultimately better AI systems.

What are the common challenges developers and architects face in this area?

One of the biggest challenges is that many ethical risks in AI systems are difficult to detect using traditional software engineering approaches. Bias may originate in training data, explainability can be limited by model architecture, and security vulnerabilities can arise from new attack vectors such as prompt injection or model extraction.

Another challenge is translating high-level principles—such as fairness or transparency—into concrete engineering practices. Teams often lack clear architectural patterns, evaluation metrics, or operational tools to implement these principles throughout the AI lifecycle. As a result, ethical considerations are frequently addressed too late, when systems are already in production.

What's one thing you hope attendees will implement immediately after your talk?

I hope attendees start treating ethical properties of AI systems the same way we treat reliability, performance, or security—as engineering requirements that must be designed, measured, and continuously monitored.

In practice, this means incorporating fairness evaluation, explainability checks, security testing, and resource efficiency considerations directly into the development lifecycle. By embedding these practices early in system design and architecture, teams can build AI systems that are not only powerful, but also trustworthy and responsible.
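As a concrete illustration of treating fairness as a measurable requirement, here is a minimal sketch of a fairness check that could run as a quality gate alongside ordinary tests. The metric shown (demographic parity difference), the sample decisions, the group names, and the 0.1 threshold are all illustrative assumptions, not a standard or the speaker's method:

```python
# Sketch: a fairness property expressed as a testable engineering
# requirement. All names, data, and thresholds below are illustrative.

def selection_rate(outcomes):
    """Fraction of positive decisions (e.g. loan approvals)."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(outcomes_by_group):
    """Largest gap in selection rates across groups (0.0 = parity)."""
    rates = [selection_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical model decisions (1 = positive) split by a protected attribute.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # selection rate 0.75
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # selection rate 0.375
}

THRESHOLD = 0.1  # illustrative tolerance, set per use case and context

gap = demographic_parity_difference(decisions)
print(f"demographic parity difference: {gap:.3f}")
if gap > THRESHOLD:
    print("FAIL: fairness requirement violated")
```

Run in a CI pipeline, a check like this can fail the build the same way a performance regression or a broken unit test would, which is exactly what it means to treat fairness as an engineering requirement rather than an afterthought.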

I also hope people leave with a stronger curiosity about the risks and limitations of AI: following the growing body of research in the field, questioning assumptions, and becoming more aware of the societal and technical implications of the systems they build.

What makes QCon stand out as a conference for senior software professionals?

QCon stands out because it focuses on real-world engineering challenges and lessons learned from building systems at scale. The conference brings together experienced practitioners who share practical insights rather than theoretical ideas, creating an environment where senior engineers and architects can learn directly from peers facing similar problems.

This emphasis on practitioner-driven knowledge makes QCon particularly valuable for professionals responsible for designing and operating complex systems, especially as new technologies like AI reshape the software landscape.


Speaker

Clara Higuera Cabañes

Responsible AI Program Lead @BBVA - Driving Innovation & Social Impact in AI, PhD Thesis Advisor, Member of OdiseIA, Previously Data Scientist @BBC

Clara Higuera Cabañes is a computer scientist and leader with over 15 years of experience across academia and industry, building and deploying AI systems in finance, media, and biomedical domains. She holds a PhD in Artificial Intelligence and has led data science and multidisciplinary teams delivering real-world AI products. Clara currently works at BBVA, where she drives the bank’s Responsible AI strategy—developing governance frameworks, tools, training, and applied research to embed ethical principles into AI systems at scale.

She has contributed to the EU’s first General Purpose AI Code of Practice and has published research in top-tier conferences such as NeurIPS and KDD. Her work focuses on translating AI ethics into practical engineering approaches, with particular expertise in algorithmic fairness, explainability, and trustworthy AI. A regular international speaker, Clara is passionate about helping organizations design AI systems that are both innovative and responsible.


Date

Wednesday Mar 18 / 11:45AM GMT (50 minutes)

Location

Churchill (Ground Fl.)

Topics

Responsible AI, Ethical AI, Algorithmic Fairness

Slides

Slides are not available


From the same track

Session AI

AI is an Amplifier: Scale High Performance, Not Dysfunction

Wednesday Mar 18 / 10:35AM GMT

AI adoption in software development is nearly universal, yet the outcomes for teams are highly variable. Why do some organizations see massive productivity gains while others see their delivery stability crash? DORA’s latest research provides a key insight: AI acts as an amplifier.

Speaker image - Nathen Harvey

Nathen Harvey

Lead of DORA and Product Manager @Google Cloud

Session AI

The Reinvention of the Dev Team

Wednesday Mar 18 / 01:35PM GMT

I don’t need to tell you that AI has changed software development forever. You know this. Whether you’re positive, negative or indifferent to this change, you can’t deny that the past 2 years have radically changed the role of the software developer.

Speaker image - Hannah Foxwell

Hannah Foxwell

Independent Consultant at the Intersection of Platform Engineering, Security, and AI, Founder of AI For the Rest of Us

Session

4 AI Native Developer Patterns

Wednesday Mar 18 / 03:55PM GMT

Development is experiencing a new phase of automation, similar to what we saw with DevOps. Numerous new tools are emerging, and it can be challenging to keep up with them.

Speaker image - Patrick Debois

Patrick Debois

AI Product Engineer @Tessl, Co-Author of the "DevOps Handbook", Content Curator at AI Native Developer Community

Session AI

Teaching Engineers, Trusting AI: How Education Enabled Autonomous Code Review

Wednesday Mar 18 / 02:45PM GMT

At Duolingo, we realized that successful AI adoption would require deliberate learning — not just access to tools. Over the past year, we scaled AI usage across 300+ engineers through intentional dogfooding programs, live training, office hours, and AI observability dashboards.

Speaker image - Sarah Deitke

Sarah Deitke

Software Engineer @Duolingo