
With AI taking the business world by storm, many expect that ESG (Environmental, Social, and Governance) reporting will be no exception.

Thanks to advances in Artificial Intelligence (AI), and in particular Large Language Models (LLMs), ESG managers will be able to augment significant parts of their day-to-day operations, particularly the often tedious work of creating ESG reports - whether for compliance purposes or to satisfy the ever-growing ESG appetite of investors, managers, banks, and other key stakeholders.

This is especially critical in a market where the demand for skills in sustainability and adjacent fields is surging. According to the World Economic Forum Future of Jobs Report (2023), Sustainability Specialists are among the fastest-growing job segments out there (trailing only AI specialists). But there simply isn't enough talent to meet the growing demand. This is why firms and investors need to think not only about satisfying their (increasingly pressing) sustainability hiring needs, but also about how to boost the productivity of their existing workforce - in a scalable way.

In ESG, as in many other industries, AI is often touted as a game-changer, promising to revolutionize the way ESG data is collected, analyzed, and reported, and to supercharge the productivity of sustainability knowledge workers.

However, amidst the hype, it is crucial to stay realistic and address some of the fast-emerging concerns surrounding the reliability of AI-generated results. In this delicate landscape, building reliable and trustworthy systems is critical - especially when the goal is to avoid greenwashing, and when reported data is increasingly subject to the scrutiny of auditors, investors, and financial regulators.

Technology always comes with limitations, some of which might not be apparent to non-technical users. We’ll try to spell out some of them in this post. 

A disclaimer: we won't be able to cover everything in this article, so if you are interested in this topic, we recommend registering for our upcoming webinar.

Sam King, Briink's CTO, will show practical use cases of AI for ESG reporting, and give you a sneak peek into our soon-to-be-released AI sustainable finance co-pilot.

Beyond the hype: applying LLMs to ESG 

Let’s take a step back and assess our vocabulary: when we say “AI”, what do we really mean? 

The truth is, Artificial Intelligence (AI) is an umbrella term for a broad class of technological solutions, and Large Language Models (LLMs) are one of them. These models are trained on vast amounts of text, and they leverage Natural Language Processing (NLP) techniques to extract insights from unstructured (textual) data.

Here, "unstructured" simply refers to data that is not stored or organized according to a conventional data model or schema. For example, financial data stored in spreadsheets is structured, while an email or a sustainability report contains unstructured data.
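To make the distinction concrete, here is a minimal illustration (with made-up figures): the same emissions fact, first as a structured record, then buried in report prose.

```python
# Hypothetical example: the same fact in structured vs. unstructured form.

# Structured: already conforms to a schema, ready for direct querying.
structured_row = {
    "company": "Acme GmbH",
    "year": 2022,
    "scope_1_emissions_tco2e": 1250,
}

# Unstructured: the same fact buried in report prose. A model (or a human)
# must first locate and interpret it before it becomes usable data.
unstructured_passage = (
    "In the reporting year 2022, Acme GmbH's direct (Scope 1) greenhouse "
    "gas emissions amounted to approximately 1,250 tonnes of CO2 equivalents."
)
```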

Most of us are familiar with the GPT-3 or GPT-4 models by OpenAI, or with some of the other language models that have been released since early 2023. These models harness NLP techniques to generate (create) new data (in this case, text passages) from large volumes of training data.

The application of LLMs to ESG reporting has garnered substantial attention in recent months. The reason? The ability of LLMs to process vast amounts of data quickly and efficiently has massive potential to transform the way ESG data is collected, the way ESG research is conducted, and the actual creation of sustainability and/or ESG reports.

In this article we won't go into too much detail on most applications of LLMs to ESG reporting (though we have written about it here and here, and we'll also cover some more concrete applications in the webinar).

Summing up the main argument of those pieces, ESG analysts are now faced with a growing number of tedious, time-intensive tasks, especially in relation to ESG reporting. Here are some of them (feel free to write me if I missed anything!):

  • Reading 100+-page documents to retrieve information about a company's sustainability and social policies.
  • Researching and trying to interpret the latest regulatory guidance.
  • Drafting different versions of the same report to convey key information on the sustainability of a company that can appeal to multiple stakeholders (which often involves simply re-working, re-organizing and summarizing existing information).
  • Tirelessly engaging with portfolio companies and other stakeholders to inquire about flawed or missing information, or to send deadline reminders.

LLMs can help with most, if not all, of these tasks. They can retrieve information from large documents, summarize regulations and/or existing reports (even adapting them to different audiences and levels of technical ESG knowledge), and streamline communication with companies.
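As a taste of what this looks like in code, here is a minimal sketch of audience-adapted summarization. It assumes the OpenAI Python SDK and an API key in your environment; any comparable LLM API would work the same way.

```python
# Minimal sketch: summarizing the same ESG excerpt for different audiences.
# Assumes the OpenAI Python SDK (pip install openai) and OPENAI_API_KEY set.
from openai import OpenAI

client = OpenAI()

def summarize_for_audience(report_excerpt: str, audience: str) -> str:
    """Summarize an ESG report excerpt, adapted to a given audience."""
    response = client.chat.completions.create(
        model="gpt-4o",  # any capable chat model works here
        messages=[
            {
                "role": "system",
                "content": (
                    "You are an ESG analyst. Summarize the report excerpt "
                    f"provided by the user for {audience}, in plain language."
                ),
            },
            {"role": "user", "content": report_excerpt},
        ],
    )
    return response.choices[0].message.content

# The same excerpt, re-framed for two different stakeholders:
# summarize_for_audience(excerpt, "a retail investor with no ESG background")
# summarize_for_audience(excerpt, "a compliance officer reviewing SFDR disclosures")
```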

By automating repetitive work, LLMs can free up valuable time for ESG professionals to focus on higher-level tasks, such as interpreting results, formulating strategies, and engaging stakeholders effectively.

In principle, this sounds fantastic. But skeptics might still raise some (in many cases, reasonable) concerns. 

Which brings us back to the main topic I wanted to address with this post:

AI in ESG: the concerns (and some solutions)

AI hallucination and reliability

One of the primary concerns surrounding AI in ESG reporting is the potential for AI models to produce false or unreliable results with unwarranted confidence, or to provide answers where none exist - a particular challenge in ESG reporting, where a lot of the underlying data simply hasn't been collected before.

It's important for models to be designed in such a way that they can recognize that they "don't know what they don't know".

When navigating uncertainty, generative models are often biased toward providing an answer at all costs, even a false one.

There have been instances where AI models have generated outputs that are seemingly accurate, but lack factual basis or context. This phenomenon, known as AI hallucination, can lead to misleading conclusions that ultimately snowball into misinformed decision-making. This raises particular concerns for users dealing with ESG regulations, which are sometimes open to interpretation and/or subject to change.

While there are ways to limit AI hallucinations, it is currently impossible to avoid them completely. The best safeguard is to make sure there is always a "human in the loop", for example by empowering users to check the information provided through smart interface design, or by ensuring human experts are within reach and can address interpretation questions in a timely manner.
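One such mitigation, sketched below, is to make abstention an explicit, acceptable outcome in the prompt itself and to ground the model in supplied context. This is prompt design only (the API plumbing is the same as in the earlier sketch), and it biases the model toward honesty rather than guaranteeing it:

```python
# Illustrative prompt design: make "I don't know" a first-class answer.
ABSTAIN_SYSTEM_PROMPT = """\
You are an ESG reporting assistant. Answer ONLY using the provided context.
If the context does not contain the information needed to answer, reply
exactly with: "I cannot answer this from the provided documents."
Do not guess, extrapolate, or invent figures."""

def build_messages(context: str, question: str) -> list:
    """Assemble chat messages that ground the model in retrieved context."""
    return [
        {"role": "system", "content": ABSTAIN_SYSTEM_PROMPT},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ]
```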

Briink has developed custom tools that, among other things, link answers to their sources, enabling users to double-check the evidence for themselves. This also helps avoid that uncanny "black-box" feeling many of us experience with AI tools.
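To make the pattern concrete (this is a generic sketch of source-linked answering, not Briink's actual implementation): the passages shown to the model are tracked and returned alongside the answer, so every claim maps back to a document and page a human can open and check.

```python
# Generic sketch of source-linked answering (not Briink's implementation).
from dataclasses import dataclass
from openai import OpenAI

client = OpenAI()

@dataclass
class SourcedAnswer:
    answer: str
    sources: list  # the passages shown to the model, with document and page

def answer_with_sources(question: str, passages: list) -> SourcedAnswer:
    """passages: pre-retrieved chunks like
    {"document": ..., "page": ..., "excerpt": ...}."""
    context = "\n\n".join(
        f"[{i}] ({p['document']}, p. {p['page']}) {p['excerpt']}"
        for i, p in enumerate(passages)
    )
    # Ask the model to cite the bracketed passage ids it relied on; the ids
    # map back to concrete documents and pages the user can verify.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": (
                "Answer using only the numbered context below, citing the "
                "passage ids you used in square brackets.\n\n"
                f"{context}\n\nQuestion: {question}"
            ),
        }],
    )
    return SourcedAnswer(answer=response.choices[0].message.content,
                         sources=passages)
```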

Moreover, it's important to ensure that the model prompts are co-designed with, and subject to continuous scrutiny by, actual subject-matter experts, such as ESG or sustainable finance managers.

At the end of the day, ESG reports often need to be audited, which means that the information provided must be trustworthy and transparent. While AI can streamline the reporting process, it is crucial to maintain human oversight to ensure the integrity and accuracy of the data.

Safeguarding sensitive company data

Another big source of concern for hesitant early AI adopters has to do with privacy and the potential involuntary disclosure of sensitive data.

While many ESG reports are public, finding data to report on entails sifting through large volumes of sensitive information. As AI processes this information (e.g. to retrieve or summarize relevant ESG data from a company's internal documents), there is a legitimate worry that it could fall into the wrong hands, leading to privacy breaches and malicious misuse.

For fund managers that wish to increase ESG engagement with target investees or portfolio companies, transparency and accountability are key to building trust. Establishing clear guidelines and disclosure practices regarding how data will be used, ensuring compliance with data protection regulations, and obtaining informed consent can help mitigate concerns and encourage companies to participate in AI-driven ESG reporting initiatives.

To address this concern, robust data governance practices and secure data handling protocols are of paramount importance. Reliable AI developers prioritize data security by implementing strong encryption methods, access controls, and data anonymization techniques. ESG managers or fund managers looking to start a relationship with a provider that uses AI must make sure the right agreements and contractual obligations are in place to protect the confidentiality and integrity of company data, even beyond national regulations. That means not only ensuring that privacy regulations - like the GDPR in Europe - are respected, but also checking that the provider goes the extra mile by adhering to further voluntary standards like SOC 2.

However, in some cases simply adhering to generic data protection policies might not be enough: different funds and companies might have different data requirements, whether to match their investment strategies or to comply with country-specific regulations. One strategy for achieving more customization is to find a provider that doesn't simply rely on an off-the-shelf GPT-powered solution, but gives you the ability to toggle between different types of LLMs and fine-tune certain data protection specifications (for example, the data retention period, or the ability to anonymize information in the output), so that your needs are fully met.
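What could that toggling look like in practice? Below is a hypothetical configuration sketch; all names are illustrative, but the point is that the model backend and data-protection behavior become explicit, reviewable settings rather than fixed vendor defaults.

```python
# Hypothetical configuration sketch; names are illustrative only.
from dataclasses import dataclass

@dataclass
class LLMDataPolicy:
    provider: str            # e.g. "openai", "azure-openai", "self-hosted"
    model: str               # which underlying LLM to use
    retention_days: int      # how long prompts/outputs may be stored
    anonymize_output: bool   # strip names/identifiers from generated text
    eu_data_residency: bool  # keep all processing on EU infrastructure

# A strict policy for a fund with tight confidentiality requirements:
strict_policy = LLMDataPolicy(
    provider="self-hosted",
    model="an-open-weights-model",
    retention_days=0,        # nothing persisted after the request completes
    anonymize_output=True,
    eu_data_residency=True,
)
```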

Bottom line: It’s all about setting realistic expectations.

Ultimately, while AI has shown promise in automating certain aspects of ESG reporting, we should stay grounded in reality.

I think most criticism of AI applications stems from excessive expectations. And some of those expectations are actually the fault of misleading marketing tactics that depict the smallest AI feature as the must-have, all-encompassing solution to your ESG needs.

As best practices, definitions, and even regulations around ESG reporting evolve before our eyes, AI-powered software needs to be flexible enough to adapt to a fund's or a company's specific ESG requirements. Out-of-the-box AI tools will not be the panacea for all the problems and roadblocks your ESG team currently faces.

And while it's perfectly possible to embed AI-powered solutions into your existing processes, it's essential that you still consult with an expert, who can also provide training resources to help you boost the productivity of your ESG team.

AI and ESG managers, a fruitful partnership? Yes, but with some caveats.

The potential of AI in ESG reporting is undeniable. However, it is crucial to approach it with realistic expectations and acknowledge its intrinsic limitations. AI should be seen as a partner to ESG managers, augmenting their capabilities and streamlining their work, rather than a replacement for human expertise and judgment.

While AI can automate repetitive tasks, extract valuable insights from vast amounts of data, and enhance the efficiency and scalability of ESG reporting, it is imperative to lean on providers that treat human oversight, and the reliability and accuracy of the generated information, as a serious priority rather than an afterthought.

ESG reporting (especially for compliance purposes, for example under the SFDR or the EU Taxonomy) requires auditable and trustworthy data, and AI should be viewed as a tool that aids in achieving these goals.

The future of ESG lies in the collaboration between AI and ESG professionals, leveraging the strengths of each to drive sustainability, transparency, and positive impact. By striking the right balance, we can harness the potential of AI while upholding the integrity and reliability that are essential for effective ESG reporting.

This is a core belief for us at Briink, and it's something that we'll address in more detail in our upcoming webinar "Embedding AI in your ESG reporting strategy", where we'll look into the combination of customizable AI solutions and subject-matter expertise through its application to concrete use cases.

If you are interested in exploring our approach, as well as learning more about how our solutions can meet your needs, feel free to get in touch (or to send me an email at tomas@briink.com).

This post has been co-authored by Carla Nassisi, Growth Marketing Manager at Briink.