
Track: Modern Learning Systems

Location: Mountbatten, 6th floor


Breakthroughs in fundamental algorithms, hardware and tooling mean that modern learning systems look very different to those deployed just a few years ago. In this track we'll cover the practical, real-world use of the latest machine learning technologies in production environments.

We'll learn about the technical details of deep learning and artificial intelligence products from the people who built and deployed them in extremely large-scale, high-profile systems. We'll hear about the latest libraries and toolkits, which make prototyping and productionizing new ideas easier and quicker. And we'll learn how to apply best practices from software engineering to make this historically fragile and costly area of software development more rigorous and reliable.

Track Host: Mike Lee Williams

Director of Research @FastForwardLabs
Mike Lee Williams is Director of Research at Fast Forward Labs, an applied machine intelligence lab in New York City. He builds prototypes that bring the latest ideas in machine learning and AI to life, and works with Fast Forward Labs' clients to help them understand how to make use of these new technologies. He has a PhD in astrophysics from Oxford.

Building Robust Machine Learning Systems

Machine learning is powering huge advances in products that we know and love. As a result, ever-growing parts of the systems we build are changing from the deterministic to the probabilistic. Without strategies for testing models, instrumenting their behaviour, and introspecting and debugging incorrect predictions, the accuracy of machine learning applications can quickly deteriorate in the wild. Wouldn't it be nice to have the best of the software engineering and machine learning worlds when building our systems? This session will take an applied view from my experience at Ravelin, and will provide useful practices and tips to help ensure your machine learning systems are robust, well audited and introspectable, and avoid embarrassing predictions, so you can hopefully sleep a little better at night.
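As a taste of the practices the session covers, here is a minimal Python sketch of two of them: pinning model accuracy on a golden dataset in a test, and instrumenting predictions so they can be audited and debugged later. This is an illustrative sketch, not Ravelin's code; the model, feature names and threshold are hypothetical.

```python
# A minimal sketch (not Ravelin's actual code) of two practices:
# (1) a regression test that pins model accuracy on a held-out "golden" set, and
# (2) instrumentation that logs each prediction so bad ones can be debugged later.

import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("fraud-model")

def predict_fraud(features):
    """Stand-in for a real trained model: score a transaction in [0, 1]."""
    return min(1.0, features["amount"] / 10_000)

def instrumented_predict(features):
    """Wrap the model so every prediction is auditable after the fact."""
    score = predict_fraud(features)
    log.info(json.dumps({"inputs": features, "score": score}))
    return score

def test_accuracy_does_not_regress():
    """Golden-set test: fail the build if accuracy drops below a floor."""
    golden = [({"amount": 9_500}, 1), ({"amount": 20}, 0), ({"amount": 8_800}, 1)]
    correct = sum((instrumented_predict(x) > 0.5) == bool(y) for x, y in golden)
    assert correct / len(golden) >= 0.99

if __name__ == "__main__":
    test_accuracy_does_not_regress()
    print("golden-set check passed")
```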

Stephen Whitworth, Co-founder and Machine Learning Engineer @Ravelin

Deep Learning @Google Scale: Smart Reply in Inbox

Anjuli will describe the algorithmic, scaling and deployment considerations involved in an extremely prominent application of cutting-edge deep learning in a user-facing product: the Smart Reply feature of Google Inbox.

Anjuli Kannan, Software Engineer @GoogleBrain

Products And Prototypes With Keras

In this talk Micha will show how to build a working product with Keras, a high-level deep learning framework. He'll start by explaining deep learning at a conceptual level, before describing the product requirements. He'll then show code and discuss design decisions that demonstrate how to train and deploy the model. In the process, he'll place Keras in context within the deep learning framework ecosystem, which includes TensorFlow, MXNet and Theano.
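To give a flavour of the train-then-deploy loop the talk walks through, here is a minimal sketch using the Keras Sequential API on synthetic data. The architecture, data and file name are illustrative assumptions, not the actual product from the session.

```python
# A minimal sketch of training and deploying a Keras model.
# Synthetic data and a toy architecture; illustrative only.

import numpy as np
from keras.models import Sequential, load_model
from keras.layers import Dense

# Synthetic binary-classification data standing in for real product data.
x_train = np.random.rand(1000, 20)
y_train = (x_train.sum(axis=1) > 10).astype("float32")

# Define and compile a small fully connected network.
model = Sequential([
    Dense(64, activation="relu", input_shape=(20,)),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Train, then persist the model so a serving process can load it.
model.fit(x_train, y_train, epochs=5, batch_size=32, verbose=0)
model.save("product_model.h5")

# At deploy time, the serving side only needs the saved artifact.
serving_model = load_model("product_model.h5")
print(serving_model.predict(x_train[:1]))
```

Saving the trained model to a single artifact, as above, is what lets a separate serving process load and use it without depending on any of the training code.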

Micha Gorelick, Research Engineer @FastForwardLabs, Keras Contributor

DSSTNE: Deep Learning at Scale

DSSTNE (Deep Sparse Scalable Tensor Network Engine) is a deep learning framework for working with large sparse data sets. It arose out of research into the use of deep learning for product recommendations, after we realized that existing frameworks were limited to a single GPU or data-parallel scaling, and that they handled sparse data sets incredibly inefficiently. DSSTNE provides nearly free sparse input layers for neural networks and stores such data in a CSR-like format, which allowed us to train on data sets that would otherwise have consumed terabytes of memory and/or bandwidth. Further, DSSTNE implements a new approach to model-parallel training that automatically minimizes communication costs, such that on GM204 GPUs one can attain nearly 100% efficient scaling given sufficiently large layer widths (~1,000 units per GM204 in use). In mid-2016 Amazon open-sourced DSSTNE in exactly the same form as it is used in production, in the hopes of advancing the use of deep learning for large sparse data sets wherever they may be.
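DSSTNE itself is written in C++/CUDA, but the CSR idea is easy to illustrate. The Python sketch below, using a hypothetical, tiny user-by-product matrix, shows how a CSR layout stores only the nonzero interactions plus two index arrays, so memory scales with the number of interactions rather than with users times products.

```python
# Illustration of the CSR (compressed sparse row) layout that makes
# sparse recommendation data cheap to store and move. The matrix here
# is a hypothetical toy example, not DSSTNE's actual data.

import numpy as np
from scipy.sparse import csr_matrix

# A tiny user-by-product interaction matrix: almost every entry is zero.
dense = np.zeros((4, 6), dtype=np.float32)
dense[0, 1] = dense[1, 4] = dense[3, 0] = 1.0

sparse = csr_matrix(dense)

# CSR keeps only the nonzero values plus two index arrays.
print("values: ", sparse.data)     # the nonzero entries
print("columns:", sparse.indices)  # column index of each value
print("row ptr:", sparse.indptr)   # where each row's values start/end
print("dense bytes: %d, sparse bytes: %d"
      % (dense.nbytes,
         sparse.data.nbytes + sparse.indices.nbytes + sparse.indptr.nbytes))
```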

Scott Le Grand, Deep Learning Engineer @Teza (ex-Amazon, ex-NVidia)

Julia: A Modern Language For Modern ML

Julia is a modern, high-performance, dynamic language for technical computing, with many features that make it ideal for machine learning, including just-in-time (JIT) compilation, multiple dispatch, metaprogramming and easy-to-use parallelism. This talk will demonstrate these features, and showcase some of the cutting-edge machine learning packages available in the Julia ecosystem, as well as the tools to deploy these models at large scale.

Dr. Viral Shah, Co-founder and CEO of Julia Computing and Co-creator of the Julia language
Dr. Simon Byrne, Quantitative Software Developer @JuliaComputing

Mini Workshop: Hands-on Deep Learning

In this interactive workshop, Micha Gorelick will lead you through modifying an existing deep learning product implemented in Keras. If you plan to run the code, please come with a well-charged laptop battery! And if you get the chance, please also download the Python packages and data we'll be working with using the following three commands:

Micha Gorelick, Research Engineer @FastForwardLabs, Keras Contributor
Mike Lee Williams, Director of Research @FastForwardLabs

