Taking AI from the lab to the business is notoriously difficult. It is not just about picking the model flavor of the day. More important is making every step of the process reliable and productive: from training and experimentation to deployment and validation, and everything in between, there are many moving pieces.
This talk will provide a high-level overview of LinkedIn’s AI ecosystem, and then zoom in on the data platform underneath it: an open-source database called Venice, which we’ve been running in production for seven years.
Building a data platform specifically tailored for AI requires careful consideration. Among other things, it must support rapid experimentation, high-throughput ingestion, and low-latency queries for online inference applications.
You will come out of this session with an understanding of these challenges, how we solved them, and how we pivoted along the way to keep up with changing workloads and requirements.
Speaker
Felix GV
Principal Staff Engineer @LinkedIn
Felix joined LinkedIn's data infrastructure team in 2014, first working on Voldemort, the predecessor of Venice. Over the years, Felix has participated in all phases of Venice's development lifecycle, from requirements gathering and architecture to implementation, testing, rollout, integration, stabilization, scaling, and maintenance.