Reach Next-Level Autonomy with LLM-Based AI Agents

Generative AI has emerged rapidly since the release of ChatGPT, yet the industry is still at a very early stage with unclear prospects and potential. The concept of building AI agents with Foundation Models (FMs) as their core controller has stirred up heated discussions and become one of the mainstream bets, while many people are still wondering: What exactly is an AI agent? What can it bring, and is it the future? What are the gaps and challenges between demos and production? In this talk, we'll apply first principles to break down the AI agent, exploring how it extends the frontiers of Generative AI applications and, in combination with your enterprise data, leads to next-level autonomy.

What's the focus of your work these days?

As an Enterprise Solutions Architect at Amazon Web Services based in Stockholm, Sweden, my role involves collaborating with some of the largest and most complex enterprise customers to architect, design, and develop cloud-optimized infrastructure solutions. My work is dedicated to accelerating the realization of business outcomes for my customers. Recently, my focus has shifted towards assisting companies across various industries in harnessing the potential of AI/ML and Generative AI technologies. Specifically, I am engaged in designing and implementing knowledge-based AI agents within the manufacturing and financial sectors for specialized scenarios. These initiatives aim to secure higher returns on investment, as well as achieve enhanced automation and operational excellence.

What's the motivation for your talk at QCon London 2024?

AI agents are one of the future trends for Generative AI, and it is relevant for all developers to have an overview of them and to contribute to the community, technologies, and industries. The motivation for this talk is to democratize AI agents for developers and give practical guidance on how to build Generative AI applications for various use cases and tasks.

How would you describe your main persona and target audience for this session?

Data scientists, ML Engineers, Software Developers, CTOs, Solutions Architects, Product Owners.
No specific prerequisites are necessary, although attendees with a foundational understanding of Generative AI concepts such as Foundation Models, multi-modality, and LangChain will find the talk more accessible.

Is there anything specific that you'd like people to walk away with after watching your session?

Some main takeaways can be:
1. Foundation Models (FMs) offer significant reasoning capabilities but have limitations, notably their inability to interact with external systems and their lack of access to current knowledge sources; AI agents present a promising solution to these limitations.

2. Integrating AI agents with FMs facilitates the development of Generative AI applications that can run tasks for a wide range of use cases and deliver up-to-date answers based on enterprise knowledge sources. However, given the current stage of AI agent development, this still requires careful consideration of trade-offs for practical, scalable adoption.
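As a rough illustration of these takeaways, the core agent pattern can be sketched as a loop in which the FM plans a step, the agent executes a tool against an external system, and the observation flows back into the answer. Everything below is a mocked, minimal sketch: the function names (`plan_step`, `get_stock_price`), the hard-coded "live" price, and the keyword-based planner are all illustrative stand-ins, not a real FM or agent-framework API.

```python
def get_stock_price(ticker: str) -> str:
    """Stand-in for an external system the FM cannot reach on its own
    (in reality: a live market-data API or enterprise knowledge source)."""
    prices = {"AMZN": "185.07"}  # pretend this is a live feed
    return prices.get(ticker, "unknown")

# Tool registry: the agent's bridge between the FM and external systems.
TOOLS = {"get_stock_price": get_stock_price}

def plan_step(question: str):
    """Stand-in for the FM's reasoning: decide whether a tool is needed
    and with which argument. A real agent would prompt the FM for this."""
    if "price" in question.lower():
        return ("get_stock_price", "AMZN")
    return (None, None)

def run_agent(question: str) -> str:
    tool_name, arg = plan_step(question)
    if tool_name:
        # Execute the tool and ground the answer in its observation.
        observation = TOOLS[tool_name](arg)
        return f"The latest AMZN price is {observation}."
    return "I can answer from my own knowledge."

print(run_agent("What is the current price of AMZN stock?"))
```

Production agents replace `plan_step` with actual FM calls (often iterating plan/act/observe several times) and `TOOLS` with authenticated connectors to enterprise data, which is where the trade-offs around reliability, latency, and security arise.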


Tingyi Li

Enterprise Solutions Architect @AWS

Tingyi Li is an enterprise solutions architect, a public speaker, and a thought leader in the field of artificial intelligence and machine learning. In her current role as Enterprise Solutions Architect at Amazon Web Services, she leads strategic engagements with major Nordic enterprises on their cloud-optimized digital transformations. As the founder and leader of the AWS Nordics Generative AI community, her work is dedicated to democratizing Generative AI and transforming how industries leverage these technologies to drive innovation and unlock business value.

Tingyi is a frequent speaker at premier conferences globally, including AWS re:Invent, QCon, TDC, and the IEEE WIE Leadership Summit, and is the featured instructor for the flagship GenAI courses at the University of Oxford, where she continues to impart cutting-edge knowledge and inspire the next generation of tech leaders. Prior to AWS, she worked as a Data & AI Engineer at Intel, Foxconn, and Huawei, building large-scale intelligent industrial information and data integration systems with advanced data pipelining and AI/ML technologies. In her spare time, she works as a part-time illustrator, writes novels, and plays the piano.



Monday Apr 8 / 01:35PM BST ( 50 minutes )


Mountbatten (6th Fl.)


AI/ML Generative AI Innovation AI Agent AI applications


From the same track

Session AI/ML

Retrieval-Augmented Generation (RAG) Patterns and Best Practices

Monday Apr 8 / 10:35AM BST

The rise of LLMs that coherently use language has led to an appetite to ground the generation of these models in facts and private collections of data.


Jay Alammar

Director & Engineering Fellow @Cohere & Co-Author of "Hands-On Large Language Models"

Session AI/ML

Navigating LLM Deployment: Tips, Tricks, and Techniques

Monday Apr 8 / 11:45AM BST

Self-hosted Language Models are going to power the next generation of applications in critical industries like financial services, healthcare, and defence.


Meryem Arik

Co-Founder @TitanML

Session AI/ML

LLM and Generative AI for Sensitive Data - Navigating Security, Responsibility, and Pitfalls in Highly Regulated Industries

Monday Apr 8 / 02:45PM BST

As large language models (LLM) become more prevalent in highly regulated industries, dealing with sensitive data and ensuring the security and ethical design of machine learning (ML) models is paramount.


Stefania Chaplin

Solutions Architect @GitLab


Azhir Mahmood

Research Scientist @PhysicsX

Session AI/ML

How Green is Green: LLMs to Understand Climate Disclosure at Scale

Monday Apr 8 / 05:05PM BST

Assessment of the validity of climate finance claims requires a system that can handle significant variation in language, format, and structure present in climate and financial reporting documentation, and knowledge of the domain-specific language of climate science and finance.


Leo Browning

First ML Engineer @ClimateAligned

Session AI/ML

The AI Revolution Will Not Be Monopolized: How Open-Source Beats Economies of Scale, Even for LLMs

Monday Apr 8 / 03:55PM BST

With the latest advancements in Natural Language Processing and Large Language Models (LLMs), and big companies like OpenAI dominating the space, many people wonder: Are we heading further into a black box era with larger and larger models, obscured behind APIs controlled by big tech?


Ines Montani

Co-Founder & CEO @Explosion, Core Developer of spaCy