Presentation: After Acceptance: Reasoning About System Outputs



4:10pm - 5:00pm




Key Takeaways

  • Understand practical real-world approaches to improving the testing of your systems
  • Learn how to test your system live in production
  • Analyze how to test your code against real data before it reaches production


Modern software development allows us to prove that new work is functionally complete. We write a set of executable specifications. We automatically execute them in the form of acceptance tests as part of our continuous delivery pipeline. When all the tests pass, we are done!

This approach is superior to what came before it, but it is by no means perfect. Testing frequently ends at the point of release, meaning that bugs in production are caught late, and by end users. Pragmatism dictates that exhaustive acceptance testing is infeasible; indeed, tests are likely to represent only a simplified version of user interactions. In production, data will almost certainly be generated by code paths that have not been fully exercised in acceptance tests, and that data is usually decoupled from our acceptance testing environment. If the current version of the system generates durable data, how do we know that future versions will consider it valid and act upon it appropriately? How can we find out about bugs after acceptance, but before our customers do?

This session will walk through some techniques for bringing your testing to production. It will show you how to sanity-check a live system using end-to-end testing while limiting interference with real user interactions and outputs. Finally, it suggests ways to observe and integrate real production data into a continuous delivery pipeline and to assert on the validity of the production system's outputs.


What is the focus of your work today?

Since I decided to go independent a couple of months ago, I'm focusing on two things: my cycling app, which has been my passion and hobby for the past few years, and performance optimisation of a ratings prediction engine for the ITV TV channel.

What’s the motivation for your talk?

These days we’re pretty good at testing. Lots of development happens pretty reasonably: test-driven, with integration and acceptance tests. We can therefore release, and release frequently, with confidence that our new features actually work. But there’s frequently a big blind spot in our testing: data.

A continuously delivered system continuously changes, so data that’s been created and modified by successive releases of the system needs to remain valid and be handled reasonably by future releases. Standard testing approaches just do not cover this: tests are usually either simplified user interactions or very specific cases that guard against regressions. Given that exhaustive testing is impossible, testing with the real data generated by the real system opens up a whole new set of opportunities for catching potentially catastrophic issues early.

This approach, which I’ve employed in a few workplaces, has been a revelation: basically, get your code in contact with production or production-like data prior to release and run various invariant-style tests against it. I’ve seen this approach catch a large number of issues: migration failures, logic issues (exceptions), memory issues and more.

It’s your data, it’s there, use it to write tests!

How would you describe the persona of the target audience of this talk?

You’ve got to be pretty good at your testing in order to bring it to your production system with its production data. So I’d say Lead, Architect, Developer. It’s language agnostic.

How would you rate the level of this talk?

Medium. If you understand testing and continuous delivery, that is.

QCon targets advanced architects and senior development leads. What do you feel is the actionable takeaway that type of persona will walk away from your talk with?

A concrete plan for how to easily test more using production data. I would like a ‘can do’ person to go to work the next day and think about how they can bring that data into their CI environment and make it easy to write tests, so that they can catch the next major bug early, before the code hits production.

What do you feel is the most disruptive tech in IT right now?

I don’t think a particular type of tech is what’s most disruptive right now. It’s more of a frame of mind. It started many years ago with the move away from enterprise Java towards simpler models, and it’s continuing now. You don’t need a complicated framework that’s impossible to debug and ties you down to a vendor. You need a tightly knit architecture that you build to suit your purposes. It’s the move from expensive ‘enterprise’ architecture to simple solutions that you build to solve your problems.

Speaker: Dr. Stefanos Zachariadis

Senior Software Engineer

Stefanos loves to code and has done so professionally for over 12 years. His career has taken various twists and turns: from academia to writing satellite software for the European Space Agency, flight search software for a major airline, test automation for various banks, and steam turbine design software, leading up to writing low-latency code that can process 50,000 orders a second on a single CPU core as a team lead for LMAX Exchange, the pioneers of Continuous Delivery. He is now an independent developer through motocode ltd and the coder behind CycleMaps, one of the leading cycling apps in the UK. His hobbies include more development, music, photography and backpacking.




Conference for Professional Software Developers