
ESG ratings explainability through machine learning techniques

Marazzina, D;Stocco, D
2023-01-01

Abstract

Environmental, Social, and Governance (ESG) scores are quantitative assessments of companies' commitment to sustainability that have become extremely popular tools in the financial industry. However, transparency in the ESG assessment process is still far from being achieved: there is no full disclosure of how the ratings are computed. Rating agencies determine ESG ratings (as a function of the E, S, and G scores) through proprietary models, and public knowledge of these models is limited to what the data provider chooses to disclose, which in many cases is restricted to the main ideas and essential principles of the procedure. The goal of this work is to exploit machine learning techniques to shed light on the ESG rating issuance process. In particular, we focus on the Refinitiv data provider, widely used by both practitioners and academics, and we consider white-box and black-box mathematical models to reconstruct the E, S, and G rating assessment model. The results show that it is possible to replicate the underlying assessment process with a satisfying level of accuracy, shedding light on the proprietary models employed by the data provider. However, there is evidence of persistent unlearnable noise that even more complex models cannot eliminate. Finally, we consider some interpretability instruments to identify the most important factors explaining the ESG ratings.
ESG ratings
Corporate social responsibility
Machine learning
Model explainability

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1248338
Citations
  • PMC: n/a
  • Scopus: 3
  • Web of Science: 1