Presentation: Models in Minutes not Months: AI as Microservices

Track: The Practice & Frontiers of AI

Location: Mountbatten, 6th flr.

Time: 5:25pm - 6:15pm

Day of week: Tuesday

Level: Intermediate - Advanced


What You’ll Learn

  • Learn how Salesforce built an AI platform that scales to thousands of customers.
  • Hear about the fundamental parts of the Einstein Platform, and how it performs automated data ingestion, machine learning, monitoring, and alerting.
  • Deepen your understanding of what it takes to bring a first model into production.


Companies are redefining their businesses by building models and learning from data. Whether it is using data science to predict the best sales and marketing targets, automating digital customer interactions with bots, or reducing waste in logistics and manufacturing, Artificial Intelligence will improve your business once deployed.
Serving up good predictions at the right time to drive the appropriate action is hard. It requires setting up data streams, transforming data, building models and delivering predictions. Most approach this by building a single model, realizing along the way that data science is only the beginning. The engineering and infrastructure required to maintain even a single model and ship its predictions present even more challenges.
Trying to replicate this success for more models or customers is even more difficult. Most approach it by building a handful of additional models, painstakingly taking one-off approaches to growing data volumes, differences in data, changes in process, and so on. Scaling to thousands of customers becomes impossible.
At Salesforce we built the Einstein Platform to enable the automation and scaling of Artificial Intelligence to thousands of customers, each with multiple models. The data ingestion, automated machine learning, instrumentation, and intelligent monitoring and alerting make it possible to serve the varied needs of many different businesses. In this talk we will cover the nuts and bolts of the system, and share how we learned to solve for scale and variability with a fully operational machine learning platform.
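The end-to-end flow described above (ingest a data stream, transform it, build a model, deliver predictions) can be sketched in a few lines. The stage names and the toy threshold model below are illustrative assumptions, not Einstein internals:

```python
def ingest():
    """Stand-in for a data stream: raw (feature, label) records as strings."""
    return [("3", 1), ("5", 1), ("1", 0), ("2", 0)]

def transform(records):
    """Turn raw fields into numeric features."""
    return [(float(x), y) for x, y in records]

def train(data):
    """Toy model: threshold halfway between the two class means."""
    pos = [x for x, y in data if y == 1]
    neg = [x for x, y in data if y == 0]
    threshold = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2
    return lambda x: 1 if x >= threshold else 0

def serve(model, x):
    """Deliver a prediction at request time, the step most pipelines neglect."""
    return model(x)

model = train(transform(ingest()))
```

Each stage here is a single function call; in production each becomes its own service with its own scaling, monitoring, and failure modes, which is where most of the engineering effort goes.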


I can't go to any data conference without hearing about the Einstein Platform. Why?


Salesforce is democratizing AI with Einstein. Any company and any business user should be able to use AI, regardless of size.

My team is responsible for embedding advanced AI capabilities in the Salesforce Platform—in fields, objects, workflows, components and more—so everyone will be able to build AI-powered apps. At Salesforce, we're not just building one model; we're building frameworks that can be customized by any Salesforce customer. If QCon were a customer, it would have its own model built from its own data. In fact, Salesforce is a customer of itself—we have our own model built from our own data.

You don't want new data scientists or an army of engineers building a new app every single time; customization should be available through clicks, not code. My team works with our data scientists within Salesforce to build these new applications so that serving every customer with their own AI is automated and easy.


What's the motivation for this talk?


There are a lot of talented data scientists and engineers who, given access to some data, use data science to build one model. That's where things usually end. What's challenging is pushing the predictions back out—a model is useless if the prediction cannot be served to the person who needs it, at the right time, so they can react to it.

I think everyone underestimates the amount of effort involved in this process. This talk is about helping people who are in the process of serving predictions to customers by sharing how Salesforce has done it at scale. It is also about helping anybody who's interested in getting started to plan ahead. Otherwise, all the effort you're putting into building a beautiful model and obtaining data will just end up in a PowerPoint deck somewhere.


Can you give one example of a pitfall someone can learn to avoid?


One of the easiest things for people to overlook is alerting. You should have systems that can detect when there is a problem: scores aren't being produced, scores look strange, or your data is outside the expected range and things go haywire. That can happen at the very end of any process. Then there are all the obvious checks on the front end. Essentially, you need to think about how you allow a user to configure and consume the data and the results in an intuitive way.
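A minimal sketch of the checks just described; the function name, thresholds, and z-score drift test are illustrative assumptions, not part of the Einstein Platform:

```python
from statistics import mean

def check_scores(scores, expected_min=0.0, expected_max=1.0,
                 history_mean=None, history_std=None, z_threshold=3.0):
    """Return alert messages for one batch of model scores."""
    alerts = []
    if not scores:
        # Scores aren't being produced at all.
        alerts.append("no scores produced in this batch")
        return alerts
    out_of_range = [s for s in scores if not expected_min <= s <= expected_max]
    if out_of_range:
        # Scores fall outside the expected range.
        alerts.append(f"{len(out_of_range)} score(s) outside "
                      f"[{expected_min}, {expected_max}]")
    if history_mean is not None and history_std:
        # Scores look strange relative to history: simple z-score drift check.
        z = abs(mean(scores) - history_mean) / history_std
        if z > z_threshold:
            alerts.append(f"batch mean drifted {z:.1f} std devs from history")
    return alerts
```

For example, `check_scores([])` flags a missing batch, and a batch whose mean sits far from its historical mean flags drift; wiring these alerts into a pager or dashboard is what turns a model into an operable service.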


Salesforce has deep insight—the metadata—into its customers' data. How do you do that in a generic way, when you don't have that insight?


This talk is not aimed at helping people build something as automated as Salesforce's platform. But at the very least it's important to be able to work with data sets in some reasonable way, and to focus on the alerts that bring your attention to the right place.


Sometimes engineers do not fully understand the ML math, and scientists do not understand the engineering that makes this work. Is that what you're talking about with alerting and observation?


Yes, that is part of it. For an engineer entering this arena, assuming upfront that everything will be perfect and that the model will work the first time is not a good assumption. I came from a data science background and had to learn the other elements. It's also very clear to me that folks who come from engineering either think they must abandon everything they've learned in engineering, or feel that once you build the model, you're all set.


Who is the persona you're talking to?


I'm talking to somebody who is designing the system—an architect, a data scientist, or an engineer entering this space. There's the term data scientist and there's the term machine learning engineer, and at some point there needs to be a blended space in between. I don't think that exists yet. But there are people who want to be in that space, and there are people who want to architect an environment for them.


What do you want someone to walk away with from your talk?


I want them to walk away with an expanded view of what they need to do to get a model out and serving their customers. I want people to leave with at least one thing that is missing from their pipeline today. My hope is that I'm going to share something new.

Speaker: Sarah Aerni

Director, Data Science @Salesforce Einstein

Sarah Aerni is a Director of Data Science at Salesforce Einstein, where she leads teams building AI-powered applications across the Salesforce platform. Prior to Salesforce she led the healthcare & life science and Federal teams at Pivotal. Sarah obtained her PhD from Stanford University in Biomedical Informatics, performing research at the interface of biomedicine and machine learning. She also co-founded a company offering expert services in informatics to both academia and industry.

