How Green is Green: LLMs to Understand Climate Disclosure at Scale

Assessing the validity of climate finance claims requires a system that can handle the significant variation in language, format, and structure found in climate and financial reporting documents, as well as knowledge of the domain-specific language of climate science and finance. In this talk we will walk through our process of engineering a software platform built around retrieval-augmented generation (RAG) with large language models (LLMs) to accelerate this assessment, from initial prototype to first live product, set in the context of our startup's growth, and assess the challenges and impacts of engineering an LLM-based solution in the climate finance space.
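For readers unfamiliar with the pattern, a minimal sketch of RAG-style zero-shot question answering over disclosure documents might look like the following. This is an illustrative outline only, not the platform described in the talk: the keyword-overlap retriever, the `climate_report.txt` file, and the final print-instead-of-LLM-call step are assumptions standing in for real embedding retrieval and a hosted LLM API.

```python
# Illustrative RAG sketch (not the speaker's actual system): retrieve the
# passages most relevant to a question, then ground an LLM answer in them.

def chunk(text: str, size: int = 400) -> list[str]:
    """Split a document into fixed-size character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def retrieve(question: str, chunks: list[str], k: int = 3) -> list[str]:
    """Toy retriever: rank chunks by keyword overlap with the question.
    A production system would use embedding similarity instead."""
    q_terms = set(question.lower().split())
    ranked = sorted(
        chunks,
        key=lambda c: len(q_terms & set(c.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(question: str, passages: list[str]) -> str:
    """Zero-shot prompt that asks the model to answer only from the retrieved context."""
    context = "\n---\n".join(passages)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

if __name__ == "__main__":
    report = open("climate_report.txt").read()  # hypothetical disclosure document
    question = "What is the company's 2030 emissions reduction target?"
    prompt = build_prompt(question, retrieve(question, chunk(report)))
    # The assembled prompt would then be sent to an LLM API; here we just print it.
    print(prompt)
```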

Interview:

What's the focus of your work these days?

Evaluating LLM quality in the time- and resource-constrained environment of an early-stage startup, where LLMs are primarily used for zero-shot question answering (as opposed to chat). How do we assess what is good versus what is valuable to users?

What's the motivation for your talk at QCon London 2024?

Applying AI to real-world problems is hard, and the approach of the big players in the industry is obviously not the right fit for small, domain-focused work. I would love to build dialogue around what a strong technical approach with a small team looks like and how to balance deep technical efforts with proving value to users, which is so vital to startups.

How would you describe your main persona and target audience for this session?

I would love to spark discussions with other technical people using LLMs in a startup context, especially those who are looking to solve problems outside the AI toolchain ecosystem. I would also love to engage with anyone applying a tech lens to climate problems. I hope that these kinds of people find my talk engaging.

Is there anything specific that you'd like people to walk away with after watching your session?

I would hope that anyone listening walks away with enough of a baseline understanding of how we are using LLMs to answer domain-specific questions to come up to me afterward and discuss the similarities and differences with their domain, company, and approach.


Speaker

Leo Browning

First ML Engineer @ClimateAligned

Leo Browning is an ML engineer who has worked with AI for five years in progressively earlier-stage companies. After completing a PhD in physics on the electronic properties of nanoscale network systems, he transitioned into working as an AI/ML engineer and hasn't looked back. He has worked on projects across esports prediction, large-scale people movement for transport planning, financial modeling, knowledge graphs, and climate finance. He is passionate about discussing the hard work required to build software that makes the most of the amazing advances in AI, and loves nothing more than the cross-pollination of ideas with other builders and thinkers in the space.


Date

Monday Apr 8 / 05:05PM BST (50 minutes)

Location

Fleming (3rd Fl.)

Topics

AI/ML, LLM, Startup, Climate


From the same track

Session AI/ML

Retrieval-Augmented Generation (RAG) Patterns and Best Practices

Monday Apr 8 / 10:35AM BST

The rise of LLMs that coherently use language has led to an appetite to ground the generation of these models in facts and private collections of data.


Jay Alammar

Director & Engineering Fellow @Cohere & Co-Author of "Hands-On Large Language Models"

Session AI/ML

Navigating LLM Deployment: Tips, Tricks, and Techniques

Monday Apr 8 / 11:45AM BST

Self-hosted Language Models are going to power the next generation of applications in critical industries like financial services, healthcare, and defence.


Meryem Arik

Co-Founder @TitanML

Session AI/ML

Reach Next-Level Autonomy with LLM-Based AI Agents

Monday Apr 8 / 01:35PM BST

Generative AI has emerged rapidly since the release of ChatGPT, yet the industry is still at a very early stage, with unclear prospects and potential.


Tingyi Li

Enterprise Solutions Architect @AWS

Session AI/ML

LLM and Generative AI for Sensitive Data - Navigating Security, Responsibility, and Pitfalls in Highly Regulated Industries

Monday Apr 8 / 02:45PM BST

As large language models (LLMs) become more prevalent in highly regulated industries, dealing with sensitive data and ensuring the security and ethical design of machine learning (ML) models is paramount.


Stefania Chaplin

Solutions Architect @GitLab


Azhir Mahmood

Research Scientist @PhysicsX

Session AI/ML

The AI Revolution Will Not Be Monopolized: How Open-Source Beats Economies of Scale, Even for LLMs

Monday Apr 8 / 03:55PM BST

With the latest advancements in Natural Language Processing and Large Language Models (LLMs), and big companies like OpenAI dominating the space, many people wonder: Are we heading further into a black box era with larger and larger models, obscured behind APIs controlled by big tech?


Ines Montani

Co-Founder & CEO @Explosion, Core Developer of spaCy