Will AI help or hurt the credibility of ESG efforts?
The last few years have seen companies tool up to improve their environmental, social, and governance (ESG) reporting. Now the latest revolution in artificial intelligence raises the question of whether AI will help or hinder those ESG efforts.
As a major public cloud provider that touts its Sustainability Cloud for helping businesses aggregate emissions data for sustainability reporting, Microsoft has been pushing the advantages of generative AI for more accurate and insightful ESG reporting.
EY’s director of the Microsoft Alliance for Europe, the Middle East and Africa, Mauro Ricardo Pontes, recently wrote that the key advantage of using generative AI for ESG reporting is the ability to handle large amounts of data.
“Sustainability reporting requires analyzing data from diverse sources, such as energy consumption, waste management, employee well-being, and community engagement,” he wrote.
“Generative AI algorithms can efficiently collect, process, and analyze this data, identifying patterns and extracting meaningful insights. This capability enables companies to gain a holistic view of their sustainability performance, identifying areas for improvement and aligning their practices with industry benchmarks and regulatory requirements.”
Every major vendor, from Salesforce to SAP, is incorporating GenAI into its software products, and ESG reporting tools are already drawing on AI. But the University of Auckland Business School’s Professor Charl de Villiers says GenAI-produced content raises concerns about the accuracy and integrity of sustainability reporting.
Inadvertent bias
In the paper How will AI text generation and processing impact sustainability reporting?, co-authored by De Villiers with Matteo Molinari (University of Kent) and Ruth Dimes (The Open University), the researchers point to bias, transparency, and accountability as the key issues arising from the use of AI in sustainability reporting.
“AI systems learn from existing data, which may carry social and cultural biases, and these biases can inadvertently be embedded in AI-generated sustainability reports, perpetuating existing inequalities and overlooking important perspectives,” says De Villiers.
"While AI can significantly streamline the reporting process, we also need to be aware of the risks of greenwashing and the replication of biased content."
When it comes to greenwashing, says De Villiers, ChatGPT and other GenAI systems are trained on vast datasets that can contain unverified and factually inaccurate information about sustainability efforts. If report authors turn to GenAI to flesh out their ESG claims, they may simply regurgitate factually inaccurate claims about their own company’s ESG efforts.
New Zealand last year became one of the first countries to require emissions and climate risk reporting by its largest companies, overseen by the Financial Markets Authority. From this year, companies will start delivering detailed sustainability reports on an annual basis, so the task of assembling the data and collating reports is becoming very real for them.
ESG reporting has also become highly valued by investors, who look for progress in these areas before they will commit funding.
"Investors rely on the accuracy of these reports to make informed decisions. If AI-generated reports are not critically assessed, we risk misleading stakeholders about a company's true environmental and social impact," says De Villiers.
Still, he says GenAI has great potential to improve the accessibility of non-financial information useful in ESG reporting.
De Villiers concludes: "The key, for now, is to balance AI's efficiencies with a critical, human-centric approach".