Conference: March 6-8, 2017
Workshops: March 9-10, 2017
Presentation: Continuous Performance Testing
Location: Churchill, G flr.
Duration:
Day of week: Monday
Level: Intermediate
Persona: Developer
Key Takeaways
- Distil six years of performance testing experience into a repeatable model that you can adopt in your engineering organization.
- Hear applied use cases where adoption of performance testing strategies caught regressions prior to production deployment.
- Understand the systematic, organization-wide approach to performance testing adopted at LMAX.
Abstract
In our world of continuous delivery with repeatable builds and automated deployments, performance testing is often bolted on as an afterthought.
This can be an acceptable strategy until your application becomes popular, after which customers start complaining about sluggish response times. As technologists, it is our responsibility to be able to reason about and improve the responsiveness of our systems.
Good performance testing is not only about catching regressions; it is a key enabler of measurement, investigation & optimisation. With integrated profiling and measurement we can determine how to drive efficient use of compute, network, and memory resources.
In this talk we will cover techniques for making performance testing a first-class citizen in your Continuous Delivery pipeline. Along the way, we will share a number of war stories from the team building one of the world's most advanced trading exchanges. Finally, we will explore how this approach fits into the landscape of containerisation, virtual machines, and provider-managed resources.
Interview
Mark: I’m a senior performance engineer at Improbable.io. We enable the creation of reality-scale simulations on top of our distributed simulation engine, SpatialOS. My role here is to bring a performance-centric view of software development. We’re building out performance testing infrastructure, improving the performance of the overall system, and driving a culture that thinks about how our system performs at scale in order to help us meet our very ambitious goals.
Mark: This talk is focused on the culmination of things I learned towards the end of my time at LMAX (my last role before Improbable.io). I really want to talk about my views (and lessons I’ve discovered) from working on performance testing over the last six years. For example, I’ve learned to use performance testing for a variety of things, including regression, experimentation/observation, and as part of a continuous integration pipeline. I feel I have some lessons worth sharing, and I think I have a really good idea of what a comprehensive performance testing strategy might look like across an engineering organization. This talk hopes to distil these things and share them with QCon.
Mark: One of the key points of the talk is that there are three or four distinct levels of performance testing you can do on a distributed system.
In much the same way that you might have functional tests, integration tests, or acceptance tests, we developed a performance testing model. For example, you might have relatively fast-to-run tests against small units of code (which cause the build to fail if we regress from a performance point of view), then step higher and have component-level tests where one complete service is wired together. Finally, the model scales up to the point where you have end-to-end performance testing.
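A minimal sketch of the lowest level of that model, a micro-level performance check that fails the build when a unit regresses past a latency budget. The workload, function names, and budget below are illustrative assumptions, not LMAX's actual harness:

```python
import statistics
import time

# Illustrative regression threshold, deliberately generous; a real team
# would derive this from historical measurements of the unit under test.
LATENCY_BUDGET_MS = 50.0

def order_matching_step(n=10_000):
    # Stand-in workload; in practice this would exercise the real unit of code.
    return sum(i * i for i in range(n))

def measure_median_ms(fn, runs=30):
    """Run fn repeatedly and return the median wall-clock time in milliseconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1000.0)
    return statistics.median(samples)

def test_latency_budget():
    # Fails the build on regression, mirroring the "small units of code"
    # level of the model described above.
    median_ms = measure_median_ms(order_matching_step)
    assert median_ms < LATENCY_BUDGET_MS, (
        f"performance regression: median {median_ms:.2f} ms "
        f"exceeds budget {LATENCY_BUDGET_MS} ms"
    )
```

The median (rather than the mean) is used so that a single noisy run on a shared CI machine does not fail the build spuriously; component-level and end-to-end tests in the model would follow the same pattern against a wired-together service.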
The overarching theme of the talk is where to apply the different levels of testing, and at each point I’ve tried to provide stories of how that type of testing won us large performance improvements or caught a performance regression before production deployment. In short, the talk describes what I think a mature performance testing strategy looks like for an engineering organization.
Mark: Tech leads, performance engineers, and generalist agile developers who are interested in fast feedback. I’d like to reach software developers who not only want their software to be functional, but to run atypically fast. That’s who I’m really targeting.
Tracks
- Architecting for Failure
  Building fault-tolerant systems that are truly resilient.
- Architectures You've Always Wondered About
  QCon classic track. You know the names. Hear their lessons and challenges.
- Modern Distributed Architectures
  Migrating, deploying, and realizing modern cloud architecture.
- Fast & Furious: Ad Serving, Finance, & Performance
  Learn the tips and techniques of high-speed, low-latency systems in ad serving and finance.
- Java - Performance, Patterns and Predictions
  Skills embracing the evolution of Java (multi-core, cloud, modularity) and reinforcing core platform fundamentals (performance, concurrency, ubiquity).
- Performance Mythbusting
  Performance myths that need busting, and the tools & techniques to get there.
- Dark Code: The Legacy/Tech Debt Dilemma
  How do you evolve your code and modernize your architecture when you're stuck with part legacy code and technical debt? Lessons from the trenches.
- Modern Learning Systems
  Real-world use of the latest machine learning technologies in production environments.
- Practical Cryptography & Blockchains: Beyond the Hype
  Looking past the hype of blockchain technologies. Alternate title: Weasel-free Cryptography & Blockchain.
- Applied JavaScript - Atomic Applications and APIs
  Angular, React, Electron, Node: the hottest trends and techniques in the JavaScript space.
- Containers - State of the Art
  What is the state of the art, what's next, and other interesting questions on containers.
- Observability Done Right: Automating Insight & Software Telemetry
  Tools, practices, and methods to know what your system is doing.
- Data Engineering: Where the Rubber Meets the Road in Data Science
  Science does not imply engineering. Engineering tools and techniques for data scientists.
- Modern CS in the Real World
  An applied, practical, real-world dive into industry adoption of modern CS ideas.
- Workhorse Languages, Not Called Java
  Workhorse languages not called Java.
- Security: Lessons Learned From Being Pwned
  How attackers think: penetration testing techniques, exploits, toolsets, and skills of software hackers.
- Engineering Culture @{{cool_company}}
  Culture, organization structure, and modern agile war stories.
- Softskills: Essential Skills for Developers
  Skills for the developer in the workplace.