Automatically Retrofitting JIT Compilers

Abstract

We as a community have attempted, multiple times, to speed up languages such as Lua, Python, and Ruby by hand-writing JIT compilers. Sometimes we've had short-term success, but the size of their standard implementations, and the pace at which those implementations change, have proven difficult to keep up with over time.

What if, instead of hand-writing JIT compilers, we could automatically derive them from languages' standard implementations? Doing so would not only preserve the language's semantics but also allow the JIT compiler to evolve at the same pace as the language's implementation.

In this talk I'll show how we've made this a reality. I'll introduce an early-stage open-source system which takes existing C interpreters as input and automatically derives a JIT compiler. Early benchmarks already demonstrate substantial performance improvements, with results continuing to improve as the system matures. I'll walk through the core ideas with concrete examples, highlight some of the surprising technical challenges, and outline where this technology could go next.

Interview:

What is your session about, and why is it important for senior software developers?

I'll be looking at programming language performance, in particular the "awkward squad": dynamically typed languages. With the notable exception of JavaScript, most such languages (e.g. Lua, Python, Ruby) struggle to run programs at high speed. I'll show how a new technique can lessen the performance penalty these languages suffer from -- without undue effort on the language implementer's behalf, and without compromising compatibility.

Why is it critical for software leaders to focus on this topic right now, as we head into 2026?

More people are writing software than ever before, and, from data analysts to AI specialists, they're often using "slow" languages. If we can give these people faster language implementations, they will be more productive, happier colleagues!

What are the common challenges developers and architects face in this area?

Fear. We'll be looking at compilers in this session: the trade-offs of different designs, the complexity involved in implementing them, and our surprisingly poor understanding of the overall design space. The good news is that there's a lot less to fear than we think!


Speaker

Laurence Tratt

Shopify / Royal Academy of Engineering Research Chair in Language Engineering @King's College London

Laurence Tratt is the Shopify / Royal Academy of Engineering Research Chair in Language Engineering in the Department of Informatics at King’s College London. His research focuses on improving our ability to develop and use software, with a particular focus on performance: how can we make more software run at the speed that its users need and want? As well as producing formal research publications, he engages with the wider community through open-source contributions and a widely read blog.


Date

Wednesday Mar 18 / 03:55PM GMT (50 minutes)

Location

Mountbatten (6th Fl.)

Topics

compilers, programming languages, performance


From the same track

Session AI/ML

Navigating the Edge of Scale and Speed for Physics Discovery

Wednesday Mar 18 / 10:35AM GMT

Details coming soon.


Thea Klaeboe Aarrestad

Particle Physics and Real-Time ML @CERN @ETH Zürich

Session architecture

Not Just I/O: Using Async/Await for Computational Scheduling

Wednesday Mar 18 / 01:35PM GMT

In the past two years I have developed a new query execution engine for Polars, which not only executes as much of your query in parallel as possible, but also runs it in a streaming fashion, so that you can process data sets that do not fit in memory.


Orson Peters

Senior Engineer of Query Execution @Polars, (Co-)Author of Stdlib Sort in Rust & Go

Session

Looking Under the Hood: Data Processing Systems Performance Tricks (and How to Apply Them to Your Code)

Wednesday Mar 18 / 02:45PM GMT

Modern data processing systems—databases, analytics engines, vector stores, and stream processors—hide an extraordinary amount of performance engineering beneath their abstractions.


Holger Pirk

Associate Professor for Data Management Systems at Imperial College London and Avid Runner — Minimizing Cache Misses, Thread Divergence and Aerobic Decoupling

Session

Vector Search on Columnar Storage

Wednesday Mar 18 / 11:45AM GMT

Managing vector data entails storing, updating, and searching collections of large and multi-dimensional pieces of data. Some believe that this justifies the creation of a new class of data systems specialized for this.


Peter Boncz

Professor @CWI, Co-Creator of MonetDB, VectorWise and MotherDuck, Database Systems Researcher, and Entrepreneur