LLM and Generative AI for Sensitive Data - Navigating Security, Responsibility, and Pitfalls in Highly Regulated Industries

As large language models (LLM) become more prevalent in highly regulated industries, dealing with sensitive data and ensuring the security and ethical design of machine learning (ML) models is paramount. This session explores best practices for ensuring the security of LLM and ML models within these demanding environments. We will look at securing against common attacks such as prompt injection and sensitive information disclosure, what to do when your models hallucinate and the broader impact AI can bring to your organisation.

Attendees will gain valuable insights into common pitfalls inherent in the development of ML models and practical strategies to avoid them. By the end of the talk, participants will be equipped to navigate the nuanced landscape of LLM and Generative AI, armed with the knowledge to secure their models, design responsibly, and sidestep potential challenges.

What's the focus of your work these days?

Stefania Chaplin: The majority of my day is spent on working with enterprise organizations securing and optimizing their software delivery life cycles and improving technical productivity. A lot of my time is spent understanding the challenges associated with highly regulated industries, public and private sector. When not at a laptop, I enjoy speaking at conferences about Security and DevSecOps or running negotiation workshops for underrepresented groups. 

Azhir Mahmood: I’m a Deep Learning Research Scientist at PhysicsX where everyday involves pushing the boundaries of what is possible with AI / Machine Learning. Being at the forefront of AI means I have to keep up to date with the latest research and then find innovative ways to solve problems for our clients. 

At PhysicsX, I’m involved in every part of our product lifecycle from assessing our clients' problems, developing an innovative new solution and finally integrating and delivering the solution. 

Prior to my current role, I spent a few years building a venture-backed AI start-up and then joined UCL as a Doctoral Candidate. During my time, I have been at the forefronts of both industry and research. Now I take all those learnings to help organizations scale and unlock new possibilities by integrating AI solutions. 

What's the motivation for your talk at QCon London 2024?

Both Stefania and Azhir come from two distinct specialties: cyber security and AI. We discovered that Machine Learning Engineers / AI Scientists are not considering security vulnerabilities and those in security have not anticipated the sudden adoption of AI, security and compliance usually lags technological advancements. By bridging the gap between these two distinct yet complementary disciplines, we can ensure AI is adopted securely and responsibly within organizations. 

How would you describe your main persona and target audience for this session?

Executives, Senior Managers and Technical Leaders are increasingly looking to integrate LLM and Generative AI into their organization. Those who haven't yet are increasingly being asked when and how their organization will adopt these technologies. 

We provide a framework to let you confidently, securely, and responsibly adopt these technologies into your organization. Helping you to unlock new possibilities and accelerating your growth.

Is there anything specific that you'd like people to walk away with after watching your session?

We hope that attendees will leave our session feeling inspired and confident in harnessing AI to discover fresh opportunities within their organization.
Here are three essential insights from this session:
1. Embrace best practices to ensure the security of your LLM / AI models.
2. Exercise responsible thinking in the design of your AI architecture.
3. Steer clear of prevalent pitfalls, including Model Bias and Data Leakage.
 

Is there anything interesting that you learned from a previous QCon?

As a frequent attendee of QCon London, one of my favorite things is the conversations with attendees! QCon always attracts innovative leadership and learning how organizations intend to adopt and scale new practices is always an interesting conversation to have 


Speaker

Stefania Chaplin

Solutions Architect @GitLab

Stefania’s (aka DevStefOps) experience as a Solutions Architect within DevSecOps, Security Awareness and Software Supply Chain Management means she's helped countless organizations understand and implement security throughout their SDLC. As a python developer at heart, Stefania enjoys optimizing and improving operational efficiency by scripting & automating processes and creating integrations. She has spoken at many conferences including RSA, DevOps Enterprise Summit, Blackhat, Qcon, ADDO and Women in DevOps. When not at a computer, Stefania enjoys surfing, yoga and looking after all her tropical plants

Read more

Speaker

Azhir Mahmood

Research Scientist @PhysicsX

Azhir’s expertise spans both the world of deep technical research and the dynamic landscape of start-ups. After graduating from Cambridge in Astrophysics he went on to co-found a venture-backed start-up within the heavily regulated medical device sector. Having experienced first hand how to leverage AI for the real world he decided to pursue a PhD at UCL advancing AI for heavily regulated industries. Currently, Azhir is a Research Scientist at PhysicsX where he takes the latest research in AI and Machine Learning and develops them into business solutions for his clients.

Read more
Find Azhir Mahmood at:

Date

Monday Apr 8 / 02:45PM BST ( 50 minutes )

Location

Mountbatten (6th Fl.)

Topics

AI/ML security Data Responsible AI

Share

From the same track

Session AI/ML

Retrieval-Augmented Generation (RAG) Patterns and Best Practices

Monday Apr 8 / 10:35AM BST

The rise of LLMs that coherently use language has led to an appetite to ground the generation of these models in facts and private collections of data.

Speaker image - Jay Alammar
Jay Alammar

Director & Engineering Fellow @Cohere & Co-Author of "Hands-On Large Language Models"

Session AI/ML

Navigating LLM Deployment: Tips, Tricks, and Techniques

Monday Apr 8 / 11:45AM BST

Self-hosted Language Models are going to power the next generation of applications in critical industries like financial services, healthcare, and defence.

Speaker image - Meryem Arik
Meryem Arik

Co-Founder @TitanML

Session AI/ML

Reach Next-Level Autonomy with LLM-Based AI Agents

Monday Apr 8 / 01:35PM BST

Generative AI has emerged rapidly since the release of ChatGPT, yet the industry is still at its very early stage with unclear prospects and potential.

Speaker image - Tingyi Li
Tingyi Li

Enterprise Solutions Architect @AWS

Session AI/ML

How Green is Green: LLMs to Understand Climate Disclosure at Scale

Monday Apr 8 / 05:05PM BST

Assessment of the validity of climate finance claims requires a system that can handle significant variation in language, format, and structure present in climate and financial reporting documentation, and knowledge of the domain-specific language of climate science and finance.

Speaker image - Leo Browning
Leo Browning

First ML Engineer @ClimateAligned

Session AI/ML

The AI Revolution Will Not Be Monopolized: How Open-Source Beats Economies of Scale, Even for LLMs

Monday Apr 8 / 03:55PM BST

With the latest advancements in Natural Language Processing and Large Language Models (LLMs), and big companies like OpenAI dominating the space, many people wonder: Are we heading further into a black box era with larger and larger models, obscured behind APIs controlled by big t

Speaker image - Ines Montani
Ines Montani

Co-Founder & CEO @Explosion, Core Developer of spaCy