As large language models (LLMs) move beyond proof-of-concept (POC) and into mainstream production, the demand for effective architectural strategies intensifies. This session delves into the intricacies of designing and implementing intelligent systems powered by LLMs, drawing on practical insights from real-world deployments.
We'll unravel the complexities of LLMs, surveying the diverse array of patterns and techniques that can be employed to harness their capabilities. From fine-tuning and zero-shot learning to context-aware modeling and prompt engineering, we'll explore a spectrum of approaches suited to different problem domains.
Along the way, we'll uncover the pitfalls that can arise when integrating LLMs into existing architectures. We'll discuss latency considerations, tokenization challenges, and the need for comprehensive guardrails to ensure the safe and reliable operation of these systems.
To effectively manage and scale LLM-driven architectures, we'll explore the role of new technologies and tools, such as LLMOps platforms, multi-modal models, and indexing frameworks like LlamaIndex. We'll also address the need for continuous learning and upskilling within teams to adapt to the evolving landscape of intelligent systems.
Join us as we explore the transformative potential of large language models and gain practical guidance on architecting intelligent systems for the future. This session is designed for directors of data/ML, data and ML architects, data scientists, ML engineers, data engineers, product managers, and UX designers seeking to navigate the ever-expanding realm of LLMs.
Interview:
What's the focus of your work these days?
The last year has been a journey into building and enabling products with Large Language Models. This involves everything from proofs of concept to bringing systems into production in the enterprise landscape with different variants of LLM usage, as well as upskilling teams and learning what it takes to monitor and observe LLMs.
What's the motivation for your talk at QCon London 2024?
A lot has happened in the field of AI in the past 16-18 months. 2023 was the year of POCs across organizations around the world; 2024 will be the year of bringing the successful POCs into production. This has an impact on the architecture and design of these systems, and my motivation is to share my learnings with the community and to learn from the conversations, as the area of bringing LLMs into production is fairly nascent.
How would you describe your main persona and target audience for this session?
This session is designed for directors of data/ML, data and ML architects, data scientists, ML engineers, data engineers, product managers, and UX designers seeking to navigate the ever-expanding realm of LLMs.
Is there anything specific that you'd like people to walk away with after watching your session?
A sneak peek into different architecture patterns one could implement to enable LLMs in their products.
Is there anything interesting that you learned from a previous QCon?
There is so much to engineering systems, which include people, processes, and technologies. You are going to constantly experience change, and it's very important to accept that.
Speaker
Nischal HP
Vice President of Data Science @Scoutbee, Decade of Experience Building Enterprise AI
Engineering leader with over 13 years of experience specializing in infrastructure, services, and products within the realm of Artificial Intelligence. My journey has taken me through diverse domains such as algorithmic trading, e-commerce, credit risk scoring for students, medical technology, insurance technology, and supply chain.
My expertise extends from initiating zero-to-one projects to effectively scaling engineering organizations, leading teams comprising 30+ engineers and data scientists, including senior individual contributors (Principal and Staff), Engineering Managers, and Product Managers.
I excel in fostering collaboration across large cross-functional teams, uniting Product Managers, Technical Program Managers, Data Scientists, Data Engineers, Product Designers, and UX Researchers, and harmonizing efforts between multi-disciplinary teams and senior leadership.
Furthermore, I possess extensive experience in the recruitment and mentoring of senior individual contributors and engineering managers, instilling a culture of engineering excellence within organizations.
One aspect of my journey that I take pride in is not just the successes but also the privilege of learning from my failures. Some of the areas where I have faced and grown from failure include:
- Identifying product-market fit with my own bootstrapped company.
- Scaling culture from a small team to a larger one.
- Establishing value streams for AI-driven products.
- Resolving conflicts early on with senior staff members to prevent undesired consequences for the team.
- Identifying the necessary processes and controls for scaling organizations and delivering continuous value.