Presentation: Continuous Performance Testing



2:55pm - 3:45pm


Key Takeaways

  • Distil 6 years of performance testing experience into a repeatable model that you can adopt in your engineering organization.

  • Hear applied use cases in which performance testing strategies caught regressions before production deployments.

  • Understand how LMAX adopted a systematic, organization-wide approach to performance testing.


In our world of continuous delivery with repeatable builds and automated deployments, performance testing is often bolted on as an afterthought.

This can be an acceptable strategy until your application becomes popular, at which point customers start complaining about its responsiveness. As technologists, it is our responsibility to be able to reason about and improve the responsiveness of our systems.

Good performance testing is not only about catching regressions; it is a key enabler of measurement, investigation & optimisation. With integrated profiling and measurement we can determine how to drive efficient use of compute, network, and memory resources.

In this talk we will cover techniques for making performance testing a first-class citizen in your Continuous Delivery pipeline. Along the way, we will cover a number of war stories experienced by the team building one of the world's most advanced trading exchanges. Finally, we will explore how this approach fits into the landscape of containerisation, virtual machines, and provider-managed resources.


QCon: What is the focus of your work today?

Mark: I’m a senior performance engineer at Improbable. We enable the creation of reality-scale simulations on top of our distributed simulation engine, SpatialOS. My role here is to bring a performance-centric view of software development. We’re building out performance testing infrastructure, improving the performance of the overall system, and driving a culture that thinks about how our system performs at scale in order to help us meet our very ambitious goals.

QCon: What is the goal for your talk at QCon London?

Mark: This talk is focused on the culmination of things I learned towards the end of my time at LMAX (my last role before joining Improbable). I really want to talk about my views, and the lessons I’ve discovered, working on performance testing over the last 6 years. For example, I’ve learned to use performance testing for a variety of things, including regression detection, experimentation and observation, and as part of a continuous integration pipeline. I feel like I have some lessons worth sharing, and I think I have a really good idea of what a comprehensive performance testing strategy might look like across an engineering organization. This talk hopes to distill those lessons and share them with the QCon audience.

QCon: What does a performance testing strategy look like?

Mark: One of the key points of the talk is that there are three or four distinct levels of performance testing you can do on a distributed system.

In much the same way that you might have functional tests, integration tests, or acceptance tests, we developed a performance testing model. For example, you might have relatively fast-to-run tests against small units of code (which cause the build to fail if we regress from a performance point of view), then step higher and have component-level tests where one complete service is wired together. Finally, the model scales up to the point where you have end-to-end performance testing.
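The fastest level of that model, a unit-level performance gate that fails the build on regression, could be sketched as below. This is a minimal illustration, not LMAX's actual harness: the function under test, the iteration counts, and the latency budget are all hypothetical placeholders.

```python
import time


def measure_percentile(fn, iterations=10_000, percentile=99):
    """Run fn repeatedly and return the given latency percentile in microseconds."""
    # Warm up first so we measure steady-state behaviour, not cold-start cost.
    for _ in range(1_000):
        fn()
    samples = []
    for _ in range(iterations):
        start = time.perf_counter_ns()
        fn()
        samples.append((time.perf_counter_ns() - start) / 1_000)
    samples.sort()
    return samples[int(len(samples) * percentile / 100) - 1]


def encode_order():
    # Hypothetical unit under test; stands in for real encoding work.
    sum(range(100))


BUDGET_US = 500.0  # illustrative budget with headroom, not a real figure


def performance_gate():
    """Fail the build (via AssertionError / non-zero exit) on regression."""
    p99 = measure_percentile(encode_order)
    assert p99 < BUDGET_US, f"p99 {p99:.1f}us exceeds budget {BUDGET_US}us"
    return p99


if __name__ == "__main__":
    performance_gate()
```

Wired into CI, a failed assertion exits non-zero and blocks the pipeline, giving the same fast feedback loop as a failing functional unit test. In practice you would track percentiles over time rather than a single hard threshold, and pin the test to consistent hardware to keep measurements comparable.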

The overarching theme of the talk is where to apply the different levels of testing, and at each point I’ve tried to provide stories of how this type of testing helped us gain large performance improvements or caught a performance regression before production deployment. So the talk is about what I think a mature performance testing strategy looks like for an engineering organization.

QCon: Who is the target persona for this talk?

Mark: Tech leads, performance engineers, and generalist agile developers who are interested in fast feedback. I’d like to reach software developers who not only want their software to be functional, but to run exceptionally fast. That’s who I’m really targeting.

Speaker: Mark Price

Senior Performance Engineer @Improbableio

Mark is a Senior Performance Engineer at Improbable, working on optimising and scaling reality-scale simulations. Previously, Mark worked as Lead Performance Engineer at LMAX Exchange, where he helped to optimise the platform to become one of the world's fastest FX exchanges.
