Presentation: The quest for low-latency with concurrent Java

Duration: 10:35am - 11:25am

Key Takeaways

  • Hear how Martin has applied the Scientific Method to algorithmic studies in concurrency, and learn some of the best ways to apply his lessons in your everyday work.
  • Learn algorithms directly from someone who has been in the trenches fighting (and winning against) concurrency on the JVM for ten years.
  • See first hand how subtle changes to underlying algorithms directly affect performance in live demos and code.

Abstract

Concurrent programming with locks is hard. Concurrent programming without locks can be really hard. Concurrent programming with relaxed memory ordering and predictable latency is said to be only for wizards. This talk focuses on a decade-long quest to discover algorithms which provide very high throughput while keeping latency low and predictable. Additionally, these algorithms and data structures need to provide real-time telemetry on how a system is performing via non-blocking operations.

We will cover some fundamental theory of concurrency and then compare various approaches to the same problem so that we can measure the impact on latency. We'll also show how some of these algorithm implementations get more interesting given the new features in Java 8.

This talk is aimed at programmers interested in advanced concurrency who want to develop algorithms with predictable response times at all levels of throughput which push our modern CPUs to the limit.

Interview

Question: 
What does a day in the life of Martin Thompson look like?
Answer: 
Quite often I get into accounts with a performance lead, but I end up doing a lot of work that is not what you might expect. As you start addressing performance issues, you have to address other tasks. You have to get their build system working right, you have to clean up code, that sort of thing. So what we often do is wind up focusing on better engineering. The performance stuff is pretty interesting, but you can really fix most performance issues pretty quickly. As part of doing that, you have to clear a lot of other stuff out of the way. That's actually where the bulk of the work is. After that, I often do training, consulting, and work on developing and refining components for people. So any of the above, really.
Question: 
How does this talk (The quest for low-latency with concurrent Java) differ from some of the other talks that you’ve done on concurrency?
Answer: 
A lot of this talk is about 10 years of research, and the things that I've discovered over those years. For example, there is the fundamental approach of using locks, which is what most people do. Then you get into the lock-free world (or lock-less programming, to be precise). That can take you up a step, but it's still limited. There are certain things like Little's Law and the Universal Scalability Law that you really can't run away from. They are math. You can't beat math.
So you have to look at what are the fundamental limiting factors in concurrent programming, and if you’re going to start taking steps forward with the full appreciation of some of the fundamental limiting factors, you can then start to discover new and interesting things.
A lot of the talk is about going on a journey from some of these early techniques through to some state of the art approaches. It’s all guided by experimentation, math, and fundamentals (rather than randomly trying stuff), though you do randomly try stuff at times :-).
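Little's Law, mentioned above, is the simple relationship L = λ × W: the mean number of items in a system equals the arrival rate multiplied by the mean time each item spends in the system. A minimal sketch, with made-up numbers chosen purely for illustration (they are not figures from the talk):

```java
// Little's Law: L = lambda x W.
// Mean items in flight = arrival rate x mean time in system.
public class LittlesLaw {
    public static void main(String[] args) {
        double arrivalRatePerSecond = 100_000;  // lambda: requests per second (illustrative)
        double meanResponseSeconds = 0.000_5;   // W: 500 microseconds (illustrative)
        // L: how many requests are concurrently in the system, on average
        double meanInFlight = arrivalRatePerSecond * meanResponseSeconds;
        System.out.println("Mean requests in flight: " + meanInFlight);
    }
}
```

The point of the law for concurrent design is that once your service time and concurrency are bounded, the throughput you can achieve is bounded too; no algorithm can escape it.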
Question: 
How do you respond to people who say just don't use Java for these types of problems, that there may be better languages for handling concurrency?
Answer: 
I get involved in shootouts within organizations, especially in high frequency trading where speed really matters, and you will get the C++ people arguing with the Java people, arguing with the C people, arguing with whatever else. My argument for that is always really simple. I find fundamentally what matters more than the low-level language is the algorithms you apply.
Think about some of the different messaging systems out there: we built Aeron in Java and it is faster than any of the others, including those built in C. None of them are faster because of the language. It's all about the algorithms. Those are the solid foundations I'm referring to.
Question: 
Can you give me a concrete example of an algorithm you will discuss?
Answer: 
I go through some of Lamport's work with queues, and how it influenced the likes of FastFlow. I talk about one of the fundamental elements at the center of those algorithms: the CAS operation. CAS operations have a spinning-retry cycle that you need to follow. So when two threads compete for something, one of them wins. The other one loses. The loser has to retry. I like to use the analogy of optimistic concurrency control. When people say they can't do lock-free algorithms, I ask if they use SVN, CVS, or Git. If you do, you are using lock-free algorithms every day.
Think about that. You check out a view of the world. You make your changes, and you commit them back. If you get a conflict, you have to resolve the conflict and try again. That is fundamentally what a lock-free algorithm is, in a nutshell. At the core, it is realizing you have a conflict and trying again (which you typically do with a CAS operation in a retry-to-success cycle).
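The read-modify-commit-retry cycle described above can be sketched with `AtomicLong.compareAndSet`. The counter below is a hypothetical example, not code from the talk: the loser of a race sees `compareAndSet` fail and goes around the loop again.

```java
import java.util.concurrent.atomic.AtomicLong;

// A minimal sketch of the optimistic CAS retry cycle:
// read a view of the world, compute a change, commit it back,
// and retry if another thread won the race in the meantime.
public class CasRetryCounter {
    private final AtomicLong value = new AtomicLong();

    public long incrementWithCas() {
        long current;
        long next;
        do {
            current = value.get();  // check out a view of the world
            next = current + 1;     // make the change locally
        } while (!value.compareAndSet(current, next)); // commit, retry on conflict
        return next;
    }

    public static void main(String[] args) throws InterruptedException {
        CasRetryCounter counter = new CasRetryCounter();
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) counter.incrementWithCas();
        };
        Thread a = new Thread(task), b = new Thread(task);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(counter.value.get()); // prints 200000: no increments lost
    }
}
```

Under contention, the losing thread burns cycles in the loop; that wasted retry work is exactly what the next step tries to eliminate.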
If you can get to a place where you don't have the retry cycle, then you can take your algorithms to a new level. One of the ways we can do that now is with instructions that have been in x86 for a while but are newly available to us in Java 8: LOCK XCHG and LOCK XADD. They can help avoid the spinning CAS loops.
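In practice this is reached through the existing `java.util.concurrent.atomic` API: from Java 8, HotSpot compiles `AtomicLong.getAndAdd`/`getAndIncrement` down to a single LOCK XADD on x86, so every competing thread's update succeeds in one instruction with no retry loop. A minimal sketch (the class name is made up for illustration):

```java
import java.util.concurrent.atomic.AtomicLong;

// The same counter, but without a spinning CAS loop.
// On x86 under Java 8+, getAndIncrement is intrinsified to LOCK XADD,
// so each thread's increment completes in a single atomic instruction.
public class XaddCounter {
    private final AtomicLong value = new AtomicLong();

    public long increment() {
        // getAndIncrement returns the old value; add 1 for the new value.
        return value.getAndIncrement() + 1;
    }
}
```

The source is identical in shape to a CAS-based counter; the performance difference comes from the JIT emitting XADD instead of a compare-and-swap retry loop, so contended threads never have to redo work.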
In the talk, I show how these algorithms perform under pressure and quantify the step forward in performance. The performance boost comes from subtle differences in the algorithms used.
Question: 
What should people know before coming to your talk? Do you recommend reading or being aware of something before coming? I mean to really grok your talk?
Answer: 
I actually like people not to come in with too many assumptions. I think it's one of the problems with our industry. There is far too much folklore and too much assumption. To quote Feynman, "We all need a scientific curiosity." If there is one thing people should do, it is come with an open mind. Conference attendees should always come to any talk with an open mind and see what they can learn. You may not agree with everything, but hopefully you learn something from every talk.

Conference for Professional Software Developers, Monday 7 March – Wednesday 9 March