Abstract
AI has distorted the signals we rely on to hire engineers. CVs are increasingly tailored, screening can be rehearsed, tech tests can look “perfect,” and even system design and behavioural answers can be polished in ways that don’t reflect real on-the-job judgement. The result: many teams feel less confident in hiring decisions, even as they add more process.
This talk reframes hiring as a measurement problem. We’ll walk the funnel end-to-end, asking a simple set of questions at each stage: what are you trying to measure, how does AI change the signal, what does “good” AI use look like here, and what trade-offs are you making? I won’t claim the problem is solved, but you’ll leave with a practical decision model and patterns you can adapt to your own constraints and culture.
Key takeaways
- Where signals break (and why “just ban it” usually isn’t the answer)
- A lightweight way to choose your stance on AI per stage
- Trade-offs: speed vs rigour, fairness vs control, remote vs in-person
Speaker
Reece Nunn
Software Engineering Manager @BBC
Reece is a Software Engineering Manager at the BBC, where he’s led platform and enablement teams, most recently through a major reorganisation. He began his career as a Software Engineer, building and running user-facing experiences and platform services spanning risk, identity, and account/authentication. Today he works on large-scale web and mobile platform capabilities, with a focus on reliability, developer experience, and evolving systems through change. He’s owned hiring processes and run assessment days for engineers at all levels, and is particularly interested in making hiring fair, practical, and predictive of real on-the-job judgement.
Reach out to him on LinkedIn