As large language models (LLMs) become more prevalent in highly regulated industries, handling sensitive data securely and ensuring the ethical design of machine learning (ML) models are paramount. This session explores best practices for securing LLMs and ML models within these demanding environments. We will look at defending against common attacks such as prompt injection and sensitive information disclosure, what to do when your models hallucinate, and the broader impact AI can bring to your organization.
Attendees will gain valuable insights into common pitfalls inherent in the development of ML models, along with practical strategies to avoid them. By the end of the talk, participants will be equipped to navigate the nuanced landscape of LLMs and generative AI, armed with the knowledge to secure their models, design responsibly, and sidestep potential challenges.
What's the focus of your work these days?
Stefania Chaplin: The majority of my day is spent working with enterprise organizations, securing and optimizing their software delivery life cycles and improving technical productivity. Much of my time goes into understanding the challenges associated with highly regulated industries, across both the public and private sectors. When not at a laptop, I enjoy speaking at conferences about security and DevSecOps or running negotiation workshops for underrepresented groups.
Azhir Mahmood: I’m a Deep Learning Research Scientist at PhysicsX, where every day involves pushing the boundaries of what is possible with AI and machine learning. Being at the forefront of AI means I have to keep up to date with the latest research and then find innovative ways to solve problems for our clients.
At PhysicsX, I’m involved in every part of our product lifecycle, from assessing our clients' problems and developing innovative new solutions to integrating and delivering them.
Prior to my current role, I spent a few years building a venture-backed AI start-up and then joined UCL as a doctoral candidate. During that time, I have been at the forefront of both industry and research. Now I take all those learnings to help organizations scale and unlock new possibilities by integrating AI solutions.
What's the motivation for your talk at QCon London 2024?
Stefania and Azhir come from two distinct specialties: cyber security and AI. We discovered that machine learning engineers and AI scientists are often not considering security vulnerabilities, while those in security have not anticipated the sudden adoption of AI; security and compliance usually lag behind technological advancements. By bridging the gap between these two distinct yet complementary disciplines, we can ensure AI is adopted securely and responsibly within organizations.
How would you describe your main persona and target audience for this session?
Executives, senior managers, and technical leaders are increasingly looking to integrate LLMs and generative AI into their organizations. Those who haven't yet are frequently being asked when and how their organization will adopt these technologies.
We provide a framework to let you confidently, securely, and responsibly adopt these technologies into your organization, helping you unlock new possibilities and accelerate your growth.
Is there anything specific that you'd like people to walk away with after watching your session?
We hope that attendees will leave our session feeling inspired and confident in harnessing AI to discover fresh opportunities within their organization.
Here are three essential insights from this session:
1. Embrace best practices to ensure the security of your LLM / AI models.
2. Exercise responsible thinking in the design of your AI architecture.
3. Steer clear of prevalent pitfalls, including model bias and data leakage.
Is there anything interesting that you learned from a previous QCon?
As a frequent attendee of QCon London, one of my favorite things is the conversations with attendees! QCon always attracts innovative leadership, and learning how organizations intend to adopt and scale new practices always makes for an interesting conversation.
Solutions Architect @GitLab
Stefania’s (aka DevStefOps) experience as a Solutions Architect within DevSecOps, security awareness, and software supply chain management means she's helped countless organizations understand and implement security throughout their SDLC. A Python developer at heart, Stefania enjoys optimizing and improving operational efficiency by scripting and automating processes and creating integrations. She has spoken at many conferences, including RSA, DevOps Enterprise Summit, Black Hat, QCon, ADDO, and Women in DevOps. When not at a computer, Stefania enjoys surfing, yoga, and looking after all her tropical plants.
Research Scientist @PhysicsX
Azhir’s expertise spans both the world of deep technical research and the dynamic landscape of start-ups. After graduating from Cambridge in astrophysics, he went on to co-found a venture-backed start-up within the heavily regulated medical device sector. Having experienced first-hand how to leverage AI in the real world, he decided to pursue a PhD at UCL, advancing AI for heavily regulated industries. Currently, Azhir is a Research Scientist at PhysicsX, where he takes the latest research in AI and machine learning and develops it into business solutions for his clients.