Opportunities and challenges to improve accounting and accountability of Corporate Social Responsibility with artificial intelligence: insights from scientific mapping
ABSTRACT
This article applies bibliometric analysis as a framework to explore the potential applications of emerging artificial intelligence (AI) tools for enhancing the accounting and accountability of Corporate Social Responsibility (CSR). Bibliometric analysis mapped scientific trends and identified emerging and relevant topics related to AI, accounting/accountability, and CSR. The opportunities and challenges of AI in promoting transparency, informed decision-making, and stakeholder engagement are discussed in the context of thematic networks and strategic diagrams that illustrate the research findings. The results indicate that emerging AI technologies are still specialized and peripheral to the core research in CSR, but opportunities exist to improve CSR accounting by generating automatic and more reliable CSR reports with large language models (LLMs) and through the application of AI inspired by biological processes for optimal decision-making. Machine learning and deep learning, in turn, can be applied to understand stakeholder attitudes and for algorithmic auditing, thus promoting the engagement and responsibility of stakeholders and increasing the accountability of CSR. The study also discusses potential risks and biases that can arise when implementing AI technologies for the accounting/accountability of CSR.
Keywords: Artificial intelligence; Science mapping; Bibliometrics; Corporate social responsibility; Accounting; Accountability.
JEL classification: M14; O33.
Oportunidades y desafíos para mejorar con inteligencia artificial la contabilidad y los reportes de Responsabilidad Social Empresarial: perspectivas desde el mapeo científico
RESUMEN
Este artículo aplica bibliometría como marco analítico para explorar las potenciales aplicaciones de las herramientas emergentes de inteligencia artificial (IA) para mejorar la contabilidad y los reportes de responsabilidad social empresarial (RSE). El análisis bibliométrico mapeó tendencias científicas e identificó temas emergentes relacionados con la IA, la contabilidad/reportes de cuentas y la RSE. Las oportunidades y desafíos de la IA para promover la transparencia, la toma de decisiones informadas y la participación de las partes interesadas se analizan en el contexto de redes temáticas y diagramas estratégicos que ilustran los hallazgos de la investigación. Los resultados indican que las tecnologías de IA emergentes todavía son especializadas y periféricas a la investigación central en RSE, pero existen oportunidades para mejorar la contabilidad de la RSE generando informes de RSE automáticos y más confiables con modelos de lenguaje grandes (LLM) y mediante la aplicación de IA inspirada en procesos biológicos para una toma de decisiones óptima. El aprendizaje automático y el aprendizaje profundo, a su vez, se pueden aplicar para comprender las actitudes de las partes interesadas y para la auditoría algorítmica, promoviendo así el compromiso de las partes interesadas y aumentando la responsabilidad de la RSE. El estudio también analiza los posibles riesgos y sesgos que pueden surgir al implementar tecnologías de IA para la contabilidad/reportes de cuentas de la RSE.
Palabras clave: Inteligencia artificial; Mapeo científico; Bibliometría; Responsabilidad social empresarial; Contabilidad; Reportes de cuentas.
Códigos JEL: M14; O33.
1. Introduction
Corporate Social Responsibility (CSR) is a vital aspect of contemporary business practices (Carroll, 2015; Coelho et al., 2023). Societal demands and environmental concerns continue to evolve, pressing organizations to recognize the need to balance their economic and profit pursuits against social and environmental responsibilities. The concept of CSR entails integrating ethical, social, and environmental considerations into the core business strategy and the governance of operations, with the aim of generating positive impacts on stakeholders and the broader society (Carroll, 2016; Coelho et al., 2023; Jenkins, 2009).
CSR has the potential to create long-term value for both society and organizations (Dawkins & Lewis, 2003). By addressing social and environmental issues, businesses can enhance their reputation, mitigate risks, attract and retain talent, and contribute to sustainable development (Bouslah et al., 2023). However, despite the growing recognition of the importance of CSR, many organizations face challenges in effectively implementing and maximizing its benefits (Boons & Lüdeke-Freund, 2013; Coelho et al., 2023). The practice of CSR encounters barriers and limitations that hinder its full potential (Etter et al., 2011). For example, in organizations that struggle to align CSR with their core business strategy, the stakeholder engagement and collaboration needed to account for CSR may be inadequate (Luyet et al., 2012; Mok et al., 2015), thus limiting the scope and impact of CSR activities. In addition, due to the lack of motivation among internal implementers or constraints related to external relationships, organizations experience inefficiency when carrying out CSR activities (Kazim et al., 2025). More importantly, new issues arise regarding the proper accounting and accountability of CSR due to emerging technologies based on artificial intelligence (AI), such as those based on Large Language Models (LLMs), which are the cornerstone of tools like ChatGPT (in its GPT-3.5 and GPT-4 versions) or Google's BARD.
Accounting and accountability are related concepts, but they refer to different aspects. Accounting focuses on the tracking and transparency of CSR actions. Accountability, in turn, seeks to address the responsibility and consequences of CSR actions (Talpur et al., 2023; Lennard & Roberts, 2023). In an organization, accounting implies keeping track of the resources, data, and processes involved in CSR. Accountability of CSR, on the other hand, refers to holding individuals, organizations, or systems responsible for the actions and decisions of CSR within a company (Lennard & Roberts, 2023). This includes understanding who is responsible for the formulation, deployment, and outcomes of CSR. Establishing clear lines of responsibility and liability for CSR decisions is imperative for organizations to properly account for CSR. Accountability is hence crucial to ensure that the impacts and consequences of CSR actions can be attributed to the appropriate parties and that there are mechanisms for addressing the effects of CSR. As part of the ethical and responsible development and deployment of CSR, companies need to consider both accounting and accountability when disclosing and reporting CSR. Inconsistent or incomplete reporting practices of CSR will lead to skepticism and doubts regarding the authenticity of an organization's CSR claims (Amran et al., 2014).
Enhancing accounting and accountability is a critical aspect of effective CSR practices (Dubbink et al., 2008). Technologies such as digital platforms and blockchains have been incorporated into organizations and used by companies to enhance the accounting and accountability of CSR practices (Ezzi et al., 2023; Kong & Liu, 2023; Benton et al., 2018). This article applies science mapping through bibliometric analysis as a framework for exploring strategies to enhance the accounting and accountability of CSR through the application of artificial intelligence. Previous studies have applied bibliometric analysis to evaluate the relationship between CSR and sustainability (Meseguer-Sánchez et al., 2021), CSR and supply chain management (Feng et al., 2017), CSR and corporate social performance (De Bakker et al., 2005), and CSR and sustainable development (Ye et al., 2020). Recently, bibliometric analysis was applied to analyze corporate sustainability reports in the context of the Global Reporting Initiative (Sahar & Aripin, 2023), and the relationship between CSR and leadership (Zhao et al., 2023a). However, to the best of our knowledge, no previous study has applied bibliometric analysis as a framework to discuss the economic and technological opportunities—as well as the social and ethical implications—associated with the implementation of AI algorithms for the accounting and accountability of CSR.
Like digital platforms and blockchains, emerging AI technologies—such as generative AI based on LLMs and stable diffusion (Qu et al., 2023)—can be implemented in an organization to enhance the accounting and accountability of CSR. In terms of the accounting of CSR, AI can help to automatically track the inputs, outputs, processes, and outcomes of the CSR actions implemented in a company. This is important for transparency and for understanding how CSR is part of the decision-making process. However, potential bias in AI algorithms can create discrimination against vulnerable groups of society, negatively affecting CSR efforts, whether due to the partiality of the programming employed in these tools or to the lack of actions taken to eliminate prejudice or misinterpretation of the information produced by AI algorithms. Given this dichotomy in the implementation of AI technologies, this article fills a research gap in the scientific literature on CSR by discussing the opportunities and challenges posed by the new AI tools for the accounting and accountability of CSR.
Insights and practical recommendations for implementing AI technologies to enhance the accounting and accountability of CSR are discussed for businesses interested in improving CSR practices in the era of artificial intelligence. Specifically, the results of the bibliometric analysis indicate, for example, that AI technologies such as social chatbots based on LLMs and machine learning (ML) models can be applied to generate automatically—that is, without human supervision or with minimal human supervision—more accurate CSR reports, because LLMs and ML models are trained on historical data covering a large number of favorable and unfavorable impacts of CSR efforts. Thus, AI models based on LLMs, ML, or even deep learning can be applied to detect, in near real time, positive and negative impacts of CSR related to each of the United Nations' Sustainable Development Goals—hence improving the accounting of CSR—while non-anthropomorphic AI tools can be applied to algorithmic auditing to increase the credibility of CSR disclosure and enhance the engagement and responsibility of stakeholders, thus increasing the accountability of CSR. The findings of the bibliometric analysis also indicate that AI that emulates human intelligence—such as machine learning and deep learning—can be used to analyze CSR data with sentiment analysis, while AI that emulates the intelligence of other biological processes—such as swarm intelligence from artificial bee colonies or genetic algorithms—can be applied in CSR to aid the decision-making of managers in terms of the optimal combinations of economic, social, and environmental impacts of corporate actions. These results, jointly with the findings obtained with thematic networks and strategic diagrams, allow us to conclude that AI is still an emerging topic in CSR, since topics such as neural networks are still specialized and peripheral to the core research in CSR; however, the emerging AI technologies can help to enhance CSR accounting and accountability if the social biases and ethical challenges of AI are properly addressed by the organizations that implement these technologies.
The next section (Section 2) of this article provides background on CSR, previous technologies such as digital platforms and blockchain, and the emerging AI technologies relevant to CSR. Section 3 describes the bibliometric methods that are applied to analyze the published scientific evidence about artificial intelligence, accounting/accountability, and CSR. Section 4 discusses concrete and specific AI tools to enhance the accounting and accountability of CSR, in the context of the science-mapping results obtained with the bibliometric analysis. A discussion of the results and the conclusions is provided for organizations interested in enhancing the accounting and accountability of their CSR practices in the context of the new and emerging AI technologies.
2. Background
2.1. Corporate Social Responsibility
Corporate Social Responsibility (CSR) comprises the voluntary actions implemented by organizations to address social, environmental, and ethical concerns in their business operations and interactions with stakeholders (Carroll, 2016; Dubbink et al., 2008). CSR goes beyond legal requirements and intends to create a positive impact on society and on the environment. The fundamental principle of CSR is the recognition that businesses have a responsibility to contribute to society by taking actions that promote social, economic, and sustainable development (Székely & Knirsch, 2005).
CSR has gained increasing importance for organizations in today's competitive business landscape, particularly due to the new challenges and opportunities of the emerging AI technologies (Saveliev & Zhurenkov, 2021). CSR is no longer merely a philanthropic endeavor but rather a strategic approach aimed at achieving long-term social and financial success (Khan et al., 2012). The importance of CSR can be attributed to numerous factors. First, societal expectations have evolved, and consumers, employees, and investors increasingly expect businesses to demonstrate responsible behavior and contribute to the greater good (Bouslah et al., 2023). If organizations fail to meet these expectations through effective social responsibility actions, they could lose the trust of stakeholders (Claydon, 2011; García-Sánchez, 2021). Second, the emerging AI technologies—mainly those from machine learning, deep learning, and generative AI—have ethical implications and can produce biases that need to be accounted for by CSR in organizations worldwide, alongside climate change, resource depletion, and social inequality; this suggests that AI needs accounting and accountability as part of CSR actions, just as CSR needs AI to improve the accounting and accountability of the social and environmental impacts of corporate actions. Incorporating these challenges into CSR practices is necessary for organizations to play a crucial role in addressing the goals of CSR in the coming decades (Wang et al., 2016). CSR initiatives can also facilitate organizations in managing risks by proactively addressing social and environmental inequalities that may influence their operations (Barauskaite & Streimikiene, 2021).
The proper accounting and accountability of CSR can bring tangible benefits to organizations. In terms of accounting, auditing the development and implementation process of CSR helps to identify potential gaps and opportunities to enhance CSR (Maccarrone & Contri, 2021). Accountability of CSR, in turn, can enhance an organization's reputation, nurture innovation and competitiveness, and attract and retain top talent that properly understands who should be accountable for the results of CSR actions (Wang et al., 2016). One important challenge for accounting and accountability is the lack of alignment between CSR initiatives and the goals of the core business strategy (Nguyen et al., 2021). When CSR is treated as a separate and isolated activity not linked to the goals of an organization, it may fail to properly consider the externalities produced by corporate actions. Thus, it is important to integrate the accounting of the effects of financial and economic goals into CSR to ensure that the business strategy, decision-making processes, and daily operations are aligned with the organization's values, in order to avoid reputational consequences (Maccarrone & Contri, 2021; Stubbs & Higgins, 2018).
Another challenge for the accounting and accountability of CSR is the identification and prioritization of relevant CSR actions. Organizations need to identify the most significant issues that align with their industry, stakeholders, and organizational values. Additionally, measuring the impact and effectiveness of CSR initiatives can be challenging, as it often involves intangible or unobserved short-term outcomes, or long-term consequences. Furthermore, stakeholders may not be aware of CSR actions, and properly explaining the social impact of CSR to them requires time, resources, and effective communication. Lack of stakeholder involvement in CSR will result in disconnected initiatives that fail to address the actual needs and concerns of the affected communities (Dawkins & Lewis, 2003; Luyet et al., 2012; Mok et al., 2015).
Different technologies have been promoted and implemented to improve the accounting and accountability of CSR. Two of these technologies are blockchains and digital reporting platforms (Chang et al., 2019). Blockchains enable secure and transparent data sharing among stakeholders (Ezzi et al., 2023), as they provide an authenticated and immutable record of transactions, making them an ideal technology for promoting transparency and accountability in various domains, including CSR (Ndayizigamiye & Dube, 2019). The decentralized nature of blockchain ensures that information is shared across a network of participants, eliminating the need for a central authority to validate and verify transactions (Bollipelly et al., 2023). As a result, stakeholders can reliably access information about CSR initiatives, including impact assessments, project details, and financial transactions (Iansiti & Lakhani, 2017). Moreover, the blockchain's immutability ensures that once data or a transaction is recorded, it cannot be altered or deleted. This feature adds an additional layer of transparency that enables stakeholders to verify integrity and authenticity (Swan, 2015). By leveraging blockchain for transparency in CSR, organizations can address challenges such as inaccurate reporting and greenwashing, i.e., the practice of making false or exaggerated claims about environmental or social responsibility, thus ensuring that stakeholders can trust the authenticity of their claims (Benton et al., 2018).
Blockchain technology also plays a crucial role in promoting accountability in CSR practices. By utilizing smart contracts, which are self-executing agreements coded on the blockchain, organizations can automate and enforce compliance with CSR standards and regulations. Smart contracts enable transparent and authenticated tracking of commitments, ensuring that organizations fulfill their obligations and that stakeholders can hold them accountable (Ghadge et al., 2023; Powell et al., 2021). Additionally, blockchain's traceability feature enables organizations to track the supply chain of products and raw materials, ensuring ethical sourcing and promoting fair trade practices. By recording every transaction and movement of goods on the blockchain, organizations can ensure that their supply chains are, for example, free from child labor, exploitation, and environmental harm (Ebinger & Omondi, 2020; Francisco & Swanson, 2018). Furthermore, blockchain technology enables the creation of decentralized platforms and marketplaces for sustainable products and services. These platforms utilize blockchain's transparency and trust features to authenticate and validate the sustainability claims of products, allowing consumers to make informed choices and support socially responsible businesses (Narayan & Tidström, 2019).
Numerous initiatives and projects have already started utilizing blockchain technology for accountability in CSR. For example, Provenance, a blockchain-based platform, enables consumers to track the journey of products, from raw materials to finished goods, ensuring transparency and fair-trade practices (Choi & Ouyang, 2021). Another notable project is the World Food Programme's Building Blocks, which utilizes blockchain to deliver direct food assistance to refugees, ensuring transparent and efficient aid distribution (Ebinger & Omondi, 2020). In terms of digital reporting platforms, CSR performance is disclosed through interactive websites, and stakeholders can engage with the CSR initiatives in a transparent manner (Cheng & Kung, 2016; Şimşek et al., 2022). Digital reporting platforms offer a wide range of multimedia features that enhance visual appeal and user experience. These platforms allow organizations to incorporate interactive elements such as videos, infographics, data visualizations, and animations, making the information more engaging and easier to comprehend (Guandalini, 2022). By presenting CSR data in a visually compelling way, organizations can effectively communicate their sustainability efforts. Digital platforms allow stakeholders to explore CSR reports at their own pace and focus on the areas most relevant to their interests (Stubbs & Higgins, 2018). This flexibility and interactivity empower stakeholders to access the information they seek and actively engage with the content in a transparent manner, with real-time updates and the ability to track changes and progress over time. Thus, organizations can update CSR information and performance metrics on a regular basis, ensuring that stakeholders have access to the most up-to-date information (Poláková-Kersten et al., 2023). This dynamic nature of digital platforms promotes transparency and enables stakeholders to stay informed about the latest CSR initiatives and outcomes. Nestlé's Creating Shared Value platform, for example, provides interactive features such as animated infographics, videos, and case studies, enabling consumers to explore Nestlé's sustainability initiatives in a visually compelling way (López & Monfort, 2017). Another example is Nike's interactive sustainability microsite, which showcases Nike's sustainability goals, progress, and initiatives through engaging multimedia content and interactive features (Tang & Higgins, 2022). Moreover, the Global Reporting Initiative's (GRI) Sustainability Disclosure Database provides a centralized platform for organizations to report their sustainability performance and enables stakeholders to access and compare sustainability data across companies (GRI, 2020). These examples show how new technologies, such as those provided by generative artificial intelligence, are incorporated into organizations to enhance the accounting and accountability of CSR practices.
2.2. Emerging AI technologies relevant for CSR
Artificial Intelligence (AI) simulates human intellect with algorithms that are programmed to perform tasks that would otherwise require human intelligence. These tasks are related to learning, reasoning, problem-solving, perception, and language understanding. Based on AI technologies that replicate or augment human cognitive abilities, it is possible to automate processes, extract insights from data, and make optimal, informed decisions. Vinuesa et al. (2020) consider that AI technologies have capabilities related to audio, visual, textual, and tactile perception and pattern recognition from data, decision-making (e.g., as in recommendation systems), prediction, and automatic knowledge extraction with tools such as data mining. The general definition of Vinuesa et al. (2020) implies that AI encompasses a large variety of subfields, including machine learning and deep learning, as well as more recent tools such as interactive communication with social robots or chatbots based on LLMs, and logical reasoning based on the derivation of theories from given logical premises.
Emerging AI technologies relevant to CSR have been previously discussed by Suchacka (2020), Zhukova et al. (2020), Pai & Chandra (2022), and more recently by Camilleri (2023). These previous studies highlight that automation, partial mechanization, and the mass implementation of solutions based on AI offer opportunities to improve communication methods in CSR, but at the same time the implementation of these AI solutions implies considering corporate digital responsibility as part of CSR (Suchacka, 2020). These AI technologies are part of the development of the digital economy and the new tools used in CSR (like blockchain, online platforms, and digital communications), which offer organizations the opportunity to increase productivity and promote technology enhancement (Zhukova et al., 2020). Despite the relevance of these AI technologies for CSR, Pai & Chandra (2022) argue that the adoption of emerging AI technologies in CSR is still limited due to the risk of AI perceived by stakeholders. Pai & Chandra (2022) add that factors influencing AI adoption are not necessarily related to technological competence or firm size, but rather to the financial strength of companies, the relative advantage that can be gained by implementing AI technology for CSR initiatives, the knowledge of decision makers about AI tools, and the potential support provided by governments (Pai & Chandra, 2022, p. 18). Finally, Camilleri (2023) argues that AI tools such as machine learning and deep learning applied to CSR can optimize workflows and aid in making complex decisions faster and more accurately than humans, leading to increased efficiency and productivity. Camilleri (2023), however, notes that these AI technologies can be disruptive, and consequently AI governance actions need to be implemented to exploit relevant AI systems in a protected, safe, and secure environment, especially given the latest developments of LLMs based on Natural Language Processing (NLP).
New-generation Natural Language Processing (NLP) technologies—a branch of AI focused on identifying patterns in text data—are, for example, powerful tools that can generate and enhance CSR reports (Fan et al., 2022; Gutierrez-Bustamante & Espinosa-Leal, 2022). Other AI algorithms can further contribute to CSR reporting by analyzing large volumes of unstructured data from sources such as social media, news articles, and reports, enabling organizations to gain valuable insights into societal and stakeholder perceptions, emerging issues, and the impact of their CSR initiatives (Abu Zayyad et al., 2021). NLP based on AI algorithms can also analyze previous CSR reports to extract relevant information, such as sustainability goals, performance metrics, and initiatives. Machine learning algorithms for clustering can also be applied to datasets of CSR reports to identify patterns, areas of strength, and areas that require improvement (Amran et al., 2014). AI-powered analysis can also assist in identifying gaps or inconsistencies in reporting, ensuring that organizations maintain accuracy and transparency in their CSR disclosures. Moreover, AI algorithms can detect and analyze sentiment within CSR reports, allowing organizations to gauge stakeholder perception and identify potential concerns or areas of high satisfaction.
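As a simple illustration of the clustering idea mentioned above, the following R sketch groups a few hypothetical CSR report excerpts by their wording using the tm package and k-means; the example texts, the number of clusters, and the preprocessing steps are illustrative assumptions rather than a prescribed pipeline.

```r
# Minimal sketch: cluster hypothetical CSR report excerpts by wording to spot
# groups with similar disclosure profiles (e.g., emissions- vs. labor-focused).
library(tm)

reports <- c(  # hypothetical excerpts
  "We reduced CO2 emissions and invested in renewable energy",
  "Our supply chain audits improved labor conditions and fair wages",
  "Energy efficiency and emission targets guide our operations",
  "Employee wellbeing, diversity and fair labor practices are priorities"
)

corpus <- Corpus(VectorSource(reports))
corpus <- tm_map(corpus, content_transformer(tolower))
corpus <- tm_map(corpus, removePunctuation)
corpus <- tm_map(corpus, removeWords, stopwords("en"))

# TF-IDF weighted document-term matrix, then k-means into two thematic clusters
dtm <- DocumentTermMatrix(corpus, control = list(weighting = weightTfIdf))
clusters <- kmeans(as.matrix(dtm), centers = 2)$cluster
data.frame(report = substr(reports, 1, 40), cluster = clusters)
```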
Big data analytics also play a role in CSR, as organizations collect large data sets from multiple sources, including internal systems, external databases, social media platforms, and Internet of Things (IoT) devices, and the analysis of these big data helps to gain better insights into CSR performance (Lindell, 2020). Real-time data capture through sensors and IoT devices can further promote proactive decision-making and the effective management of sustainable CSR practices (Sivarajah et al., 2017). In addition to having large datasets, to leverage this rich information organizations need to employ various techniques such as statistical analysis, machine learning, deep learning, and data mining (Bibri & Krogstie, 2018). These techniques uncover hidden patterns, trends, and relationships within the data, providing organizations with valuable insights for improving their CSR performance. The analysis of big data also enables organizations to go beyond descriptive analytics and leverage predictive analytics to anticipate future trends and outcomes related to their CSR initiatives (Mikalef et al., 2019). By analyzing historical data and identifying patterns, organizations can develop predictive models to forecast the impact of their sustainability actions, identify potential risks, and optimize resource allocation (Bibri & Krogstie, 2018; Mikalef et al., 2019). Additionally, data visualization, through interactive dashboards, charts, and graphs, plays a critical role in presenting complex data in a clear and understandable manner for decision makers (Khalid & Zeebaree, 2021). This visual representation of CSR data enables organizations to communicate their performance to stakeholders in a transparent manner and facilitates data-driven discussions.
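To make the predictive-analytics point concrete, the sketch below fits a random forest on hypothetical historical CSR indicators and then forecasts the impact score of a planned initiative; the variables, the simulated data, and the choice of the randomForest package are assumptions for illustration only.

```r
# Minimal sketch: forecast a CSR impact score from operational indicators using
# hypothetical historical data, so planned initiatives can be compared beforehand.
library(randomForest)
set.seed(1)

history <- data.frame(
  energy_use   = runif(200, 50, 150),  # MWh per quarter (hypothetical)
  volunteering = runif(200, 0, 500),   # employee volunteer hours (hypothetical)
  audits       = rpois(200, 3)         # supplier audits per quarter (hypothetical)
)
# Hypothetical "true" relationship: impact improves with volunteering and audits,
# worsens with energy use, plus noise
history$impact_score <- 100 - 0.3 * history$energy_use +
  0.05 * history$volunteering + 2 * history$audits + rnorm(200, 0, 5)

model <- randomForest(impact_score ~ ., data = history, ntree = 300)

# Predicted impact of a planned initiative described by the same indicators
planned <- data.frame(energy_use = 80, volunteering = 300, audits = 4)
predict(model, planned)
```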
Virtual Reality (VR) and Augmented Reality (AR) based on AI algorithms are also technologies that have emerged as powerful tools for providing immersive experiences in CSR. VR allows users to enter a simulated environment where they can directly experience CSR initiatives, while AR overlays digital content onto the real world, enabling interactive experiences (Lima et al., 2019). Moreover, VR and AR can be employed to showcase the environmental impact of different practices, enabling stakeholders to visualize the consequences of adopting them and make informed decisions (Cheng et al., 2022; Cheng & Kung, 2016), since immersive experiences facilitate stakeholder engagement and support sustainable practices. Studies on the use of 360-degree virtual reality as a tool in communicating CSR campaigns found, for example, that telepresence and VR engagement were significant mediators in predicting willingness to promote social initiatives and advocate for CSR (Zhao et al., 2023b). On the other hand, however, the new and emergent AI technologies also imply challenges for the CSR of organizations. Mainly, these challenges are related to security and data privacy concerns, technical complexities, resource constraints, and ethical implications regarding the societal and environmental impact of the use of AI.
Generative AI is also an AI technology highly relevant for CSR. Generative AI is based on algorithms and large language models (LLMs) pre-trained to identify patterns and relationships in large datasets (Ozdemir, 2023). After being trained, generative AI can create new and creative content, such as text, images, or music, which is very similar to content created by humans (McCoy et al., 2023). Generative AI models that create realistic images and videos of people, animals, objects, and scenes can be used to create virtual worlds, such as those used in VR (Ratican et al., 2023). Generative AI models for text generation can be used to build chatbots or to write content for documents and reports (Lu et al., 2023). Two of the most popular generative text technologies of AI are Bard and ChatGPT. Bard is an LLM from Google, and ChatGPT (in its GPT-3.5 and GPT-4 versions) is an LLM developed by OpenAI; both models generate text, translate languages, write different kinds of creative content, and answer questions (Ozdemir, 2023). BARD has since been rebranded as Google's Gemini, whose smallest size, Nano, is available in two versions: Nano-1 (1.8 billion parameters) and Nano-2 (3.25 billion parameters).
Organizations can use generative AI for task automation and for the creation of personalized and tailored CSR products and documents. As with other AI algorithms, however, generative AI can be biased, and these biases have ethical and social impacts similar to those of AI algorithms for predictive modelling. Generative AI carries the additional risk that it can be used to create fake content that deceives people and spreads misinformation. Since generative AI models are based on complex models with billions of parameters, these models and their results can be difficult to understand and explain for non-technical stakeholders. Hence, generative AI can affect CSR in a number of positive and negative ways. On the positive side, generative AI can be used by organizations for the generation of reports on CSR performance and the creation of marketing material for CSR campaigns, including more engaging CSR content that can help to raise awareness of CSR initiatives and attract more support, especially if AI is applied to identify areas where CSR efforts can be most effective. Moreover, generative AI can be used for the personalization of CSR outreach to different stakeholders, such as employees, customers, and investors. This can help to make CSR messages more relevant and engaging.
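As an illustration of how text generation could be embedded in a CSR reporting workflow, the sketch below sends verified KPI figures to a hosted LLM and asks for a draft report section. It is only a sketch: it assumes R's httr package, an OpenAI-style chat completions endpoint with a GPT-4 class model, an API key stored in the OPENAI_API_KEY environment variable, and hypothetical KPI values; any draft produced this way still requires human review before disclosure.

```r
# Minimal sketch: draft a CSR report section from verified KPIs with a hosted LLM.
library(httr)

kpi_summary <- "CO2 emissions down 12%; 4,300 volunteer hours; 2 supplier audits failed."  # hypothetical

resp <- POST(
  url = "https://api.openai.com/v1/chat/completions",
  add_headers(Authorization = paste("Bearer", Sys.getenv("OPENAI_API_KEY"))),
  body = list(
    model = "gpt-4",
    messages = list(
      list(role = "system",
           content = "You draft factual, non-promotional CSR report sections."),
      list(role = "user",
           content = paste("Draft a short CSR report section from these KPIs:", kpi_summary))
    )
  ),
  encode = "json"
)

draft <- content(resp)$choices[[1]]$message$content
cat(draft)  # the draft must still be reviewed by humans before it is disclosed
```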
3. Methods
This section describes the bibliometric methods applied in this article for the identification and prioritization of relevant AI technologies to improve the accounting and accountability of CSR. Bibliometric analysis is a research method that is applied to study and analyze scientific literature. Bibliometric analysis provides a systematic and objective approach to evaluating various aspects of scholarly publications, such as research productivity, impact, influence, collaboration patterns and common topics. The goal of bibliometric analysis is to gain insights into the structure and dynamics of scientific research within a specific field or across multiple disciplines.
When it comes to finding relevant topics in scientific literature, bibliometric analysis offers several valuable benefits, as it helps to map research trends and identify emerging topics. In bibliometric analysis, the mapping of research trends is based on tracking the evolution of research topics over time, on the basis of the production of published articles in peer-reviewed journals and considering co-citation and co-occurrence analysis of keywords, which provides insights into the historical development of a field in terms of the emergence and decline of specific areas of study. Moreover, bibliometric analysis enables assessing the interdisciplinarity of research topics by identifying publications that bridge multiple fields. This helps understand the cross-disciplinary nature of scientific literature and identifies areas of research with high impact and potential societal relevance. Two common metrics of bibliometric analysis are single-country publications (SCP) and multi-country publications (MCP). SCP are publications of researchers from institutions within a single country. MCP, on the other hand, refers to research produced through collaboration between researchers from multiple countries.
In this study, the Bibliometrix package in R (Aria & Cuccurullo, 2017) was the tool used to analyze the records of research articles extracted from Web of Science (WoS) and Scopus. The Bibliometrix package in R can be used for whole-process bibliometric analysis and visual display, including statistical analysis, data preprocessing, co-occurrence matrix construction, co-citation analysis, coupling analysis, co-word analysis, and cluster analysis on documents from WoS and Scopus (Xie et al., 2020). WoS and Scopus are comprehensive databases of scholarly literature that provide access to records from multiple databases of academic journals, conference proceedings, and other documents in various academic disciplines. The citations of scholarly literature extracted from WoS and Scopus can be used to track and identify research papers about a specific topic of interest. WoS and Scopus have a built-in feature that allows researchers to export the records of scientific articles in BibTeX format, which can later be analyzed using the quantitative tools available in the Bibliometrix package in R.
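The sketch below shows what this import and descriptive stage looks like in code, assuming the BibTeX exports from WoS and Scopus have been saved locally under the hypothetical file names wos.bib and scopus.bib.

```r
# Minimal sketch of the import and descriptive stage of the bibliometric analysis.
library(bibliometrix)

wos    <- convert2df("wos.bib",    dbsource = "wos",    format = "bibtex")
scopus <- convert2df("scopus.bib", dbsource = "scopus", format = "bibtex")

# Merge both sources and drop duplicate records before the analysis
M <- mergeDbSources(wos, scopus, remove.duplicated = TRUE)

# Descriptive bibliometrics: annual production, most cited documents, SCP/MCP, etc.
results <- biblioAnalysis(M, sep = ";")
summary(results, k = 10)
```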
Science mapping of research articles based on bibliometric analysis helps to discover key authors and influential papers, which leads to a better understanding of the intellectual structure of a field. This information uncovers the essential contributions of influential researchers, since highly cited papers group into clusters indicative of influential and relevant research topics. In turn, the identification of emerging topics in bibliometric analysis is based on the analysis of the frequency of keywords and subject classifications in academic papers. The identification of emerging research topics helps to identify areas of high interest and potential new research directions.
In the bibliometric analysis, a co-occurrence matrix was computed to identify relevant topics that are underdeveloped or highly developed in relation to accounting/accountability, CSR, and AI. The co-occurrence matrix was calculated with a corpus of documents extracted from WoS and Scopus. A limitation of the study is that, even though the EBSCO database was also consulted, EBSCO does not allow users to download the retrieved references in the BibTeX (.bib) format required to analyze the references with bibliometric techniques. The bibliographic information file (.bib) was formally requested from EBSCO, but no answer had been received at the time of preparing this study.
The calculation of the co-occurrence matrix is based on counting the number of documents in which pairs of keywords appear together, a quantity called the co-occurrence frequency. After calculating the co-occurrence matrix of keywords, similar topics related to CSR, accounting/accountability, and AI were grouped into clusters using the simple centers (SC) algorithm; see Coulter et al. (1998) and Cobo et al. (2011) for details.
A network graph was constructed to visualize the results of the co-occurrence matrix and the clusters identified with the SC algorithm. This network graph is called a thematic network because it displays at its center the core scientific research line (in our case CSR), surrounded by interconnected clusters of topics associated with the main theme of the network (accounting, accountability, and AI). The thematic network also includes spheres to highlight the relevance of each topic in the network. The size of these spheres is proportional to the number of scientific documents related to the most frequent keywords found in the co-occurrence analysis.
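Continuing the sketch above (and assuming the merged data frame M), the co-occurrence matrix and the thematic network can be obtained with Bibliometrix's biblioNetwork and networkPlot functions; the number of keywords shown and the label size are illustrative choices.

```r
# Minimal sketch: keyword co-occurrence matrix and thematic network.
library(bibliometrix)

# Co-occurrence of Keywords Plus (use network = "author_keywords" for author keywords)
NetMatrix <- biblioNetwork(M, analysis = "co-occurrences",
                           network = "keywords", sep = ";")

# Network of the 30 most frequent keywords; sphere size is proportional to the
# number of documents in which each keyword occurs
net <- networkPlot(NetMatrix, n = 30, type = "auto",
                   Title = "Keyword co-occurrence network", labelsize = 0.8)
```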
Besides a network graph, a strategic diagram was created to visualize and classify the research topics related to CSR, accounting/accountability, and AI. A strategic diagram is a two-dimensional figure that classifies research topics in four quadrants, on the basis of combinations of metrics of density and centrality (see an illustration in Figure 1).
Figure 1. Illustration of a strategic diagram used for the science mapping of CSR, accountability, and AI

Source: Own elaboration based on Muñoz-Leiva et al. (2012).
The development degree (density) in the strategic diagram is a metric calculated from the strength of the internal ties among keywords, and hence it reflects how developed a scientific topic is in a research field. Centrality, in turn, measures the strength of the external ties between topics, and hence approximates the relevance of a specific topic in a research field; see Callon et al. (1991) for details.
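For reference, these two measures can be written compactly as follows, a sketch following Callon et al. (1991) and Cobo et al. (2011): e_ij is the equivalence index between keywords i and j, c_ij is the number of documents in which both keywords co-occur, c_i and c_j are the numbers of documents containing each keyword, and w is the number of keywords in a theme.

```latex
% Equivalence index between keywords i and j
e_{ij} = \frac{c_{ij}^{2}}{c_i \, c_j}
% Callon centrality: strength of a theme's external ties (k inside the theme, h outside)
\mathrm{centrality} = 10 \sum_{k \in \mathrm{theme},\; h \notin \mathrm{theme}} e_{kh}
% Callon density: strength of a theme's internal ties, normalized by its w keywords
\mathrm{density} = 100 \, \frac{\sum_{i,j \in \mathrm{theme}} e_{ij}}{w}
```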
The four quadrants of the strategic diagram reflect which research topics are well-developed and relevant (upper-right quadrant, high centrality/high density), which ones are well-developed but less relevant (upper-left quadrant, low centrality/high density), which topics are not fully developed and at the same time marginal (lower-left quadrant, low density/low centrality), and which topics are relevant but still not fully developed (lower-right quadrant, high centrality/low density). As in the network graph, spheres are used in the strategic diagrams to show the importance of a topic. The spheres have a size proportional to the number of documents associated with a topic.
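In practice, the strategic diagram itself can be produced directly from the merged bibliographic data frame with Bibliometrix's thematicMap function, as in the sketch below; the field, frequency threshold, and number of labels are illustrative parameter choices.

```r
# Minimal sketch: strategic diagram (density vs. centrality) from the merged data M.
library(bibliometrix)

map <- thematicMap(M, field = "ID", n = 250, minfreq = 5,
                   stemming = FALSE, size = 0.5, n.labels = 3, repel = TRUE)
plot(map$map)       # four-quadrant strategic diagram
head(map$clusters)  # per-cluster centrality and density scores
```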
4. Results of scientific mapping
Figure 2 shows the historic trends of the number of scientific publications about accounting/accountability, corporate social responsibility (CSR), and artificial intelligence (AI) obtained from the WoS and Scopus records. The following keywords were used to extract the historical relevant articles about accounting/accountability, CSR, and AI, from WoS and Scopus:
String 1: "accounting" OR "accountability" AND "artificial intelligence"
String 2: "accounting" OR "accountability" AND "corporate social responsibility" AND "artificial intelligence"
When the keywords in String 1 are used in WoS, a total of 541 documents are obtained, of which 464 (86%) are journal articles (406 documents), book chapters (2 documents), early access research documents (49 documents), and proceedings papers (6 documents), with the rest of the documents being editorial letters and reviews. The timespan of documents collected from WoS is 28 years, from 1995 up to June 2023. Since 1995, the annual growth rate of scientific publications about accounting/accountability and AI is 18%, with an exponentially increasing trend since 2016 (Figure 2). This trend shows the increasing interest in the accounting and accountability of AI technologies.
In Scopus, the search with String 1 produced 3,012 documents, including 1,420 articles, 1,366 conference papers, 51 conference reviews, and 175 reviews. The timespan of documents gathered from Scopus is 42 years, from 1981 up to 2023. Since 1981, the annual growth rate of scientific publications about accounting/accountability and AI is 6.5%, with an exponentially increasing trend since 2017 (Figure 2). In the case of String 2, only 21 documents about accounting/accountability, CSR, and AI are obtained from ISI Web of Science (WoS); 20 are articles (95%) and 1 is a review document. In Scopus, 8 articles are obtained, 2 of which are also in the list obtained from WoS. Thus, the combined dataset of Scopus and WoS, without duplicate entries and considering only published articles and reviews, includes 27 documents. The low number of documents obtained when using String 2 is in line with the findings of Minkkinen et al. (2022), who, while searching academic databases, found only a small number of relevant papers that discuss the connection between AI and the environmental, social, and governance (ESG) criteria that have grown out of CSR. Due to the low number of articles discussing the link between ESG, CSR, and AI, Minkkinen et al. (2022) argue that this topic is still an emerging area of study.
Figure 2. Number of scientific publications about accounting/accountability, corporate social responsibility (CSR), and artificial intelligence (AI)

The time span of the combined documents of WoS and Scopus obtained with String 2 is between 2018 and 2023, with an annual growth rate of 41%, which further indicates that new interdisciplinary research is arising across different scientific disciplines to approach, from multiple perspectives, the role of accounting and accountability of CSR given the emerging AI technologies. This finding indicates that while the accounting and accountability of AI technologies are not new topics, the accounting and accountability of CSR in the context of AI technologies are scientific topics that began to be investigated only recently (Figure 2).
Figure 3 shows that China, the USA, Norway, Spain, Sweden, and the UK are the countries with the most prolific authors doing research on accounting/accountability, corporate social responsibility (CSR), and artificial intelligence (AI). Most of this research is the result of multi-country collaborations (MCP), since the values of MCP are larger than those of single-country publications (SCP) in the top countries, thus indicating that multidisciplinary teams of researchers from multiple countries were collaborating on common projects related to accounting/accountability, CSR, and AI. Figure 4 provides additional details about the collaboration between countries (Figure 4, left) and authors (Figure 4, right). China, the main country in terms of publications, collaborated with authors from developing countries such as Pakistan and Vietnam, and authors from developed countries like Canada and Australia. The US, the second most important country in terms of multi-country publications, collaborated with authors from several different countries worldwide, such as Germany, China, Denmark, the UK, and India. The density map of the collaboration between researchers (Figure 4, right) further indicates 3 clusters of collaboration between authors: the first cluster is formed by P. D'Cruz and S. Du (co-authors of D'Cruz et al., 2022), the second cluster is formed by A. Kumar, M.K. Lim, and L. Chung (co-authors in Zhan et al., 2021), and the third cluster is formed by P. Neidermeyer, P. Ohman, T. Rana, I. Samsten, M. Semenova, J. Svanberg, and T. Ardeshiri, co-authors of Svanberg et al. (2022).
Figure 3. Countries with the most prolific authors doing research on accounting/accountability, corporate social responsibility (CSR), and artificial intelligence (AI)

Figure 4. Collaboration between countries (up) and authors (down) working on accounting/accountability, corporate social responsibility (CSR), and artificial intelligence (AI)


Figure 5 shows the scientific mapping of accounting/accountability, CSR, and AI, obtained with the documents extracted from WoS and Scopus. The role of artificial intelligence in CSR is a topic that is increasing in relevance but has not yet been fully developed, as it is changing from being an emerging theme to a basic topic: AI appears at the bottom of the diagram of Figure 5, between the lower-left quadrant and the lower-right quadrant of traditional (basic) topics of businesses, specifically business orientation, management, and financial performance. Technical concepts of algorithms and models, such as neural networks, are more developed but are still specialized topics (upper-left quadrant of Figure 5) and are becoming an emerging topic in CSR jointly with the ethical challenges and the responsibility (accountability) of CSR (lower-left quadrant of Figure 5). The main topics related to accounting/accountability, CSR, and AI concern its impact, governance, ownership, cost, and dimensions, which are well developed, while the legitimacy of emerging AI technologies in decision-making is still not fully developed but is considered a relevant theme that needs further research in the scientific literature about accounting/accountability, CSR, and AI.
Figure 5. Science mapping of accounting/accountability, CSR, and AI

Figure 6 provides additional information about the clusters of topics in accounting/accountability, CSR, and AI, on the basis of the keyword co-occurrences of all the documents analyzed. CSR is at the center of the five clusters of scientific research, mainly in terms of the impact and orientation of CSR, and its relationship with management and financial performance. Closely related to, and still part of, the core cluster of corporate social responsibility are AI algorithms based on neural networks and AI inspired by biological processes, specifically bee colony algorithms for optimal decision-making. Four other clusters of research around CSR are related to the dimensions and the framework of CSR, the empirical evaluation of CSR, and the management and financial performance of organizations that implement CSR actions. Accounting and accountability of AI in the context of CSR appear in the sparse right cluster of Figure 6, in the form of scientific topics related to concepts of justice and legitimacy in the adoption of AI for the different dimensions of CSR, but specialized topics related to AI are still very marginal and not yet at the core of CSR research, since artificial intelligence appears on the far right side of Figure 6 as an outlier not yet fully consolidated into the main research topics related to CSR.
Figure 6. Clusters of topics in accounting/accountability, CSR, and AI (Keyword co-occurrences)

Table 1 summarizes the 10 most frequently cited articles in relation to the new trend of scientific research on accounting and accountability of CSR in the context of AI: Shum et al. (2018), Govindan et al. (2019), Popescu & Banţa (2019), Sætra (2021), Zhan et al. (2021), Hongdao et al. (2019), Garvey et al. (2023), Roszkowska (2021), Minkkinen et al. (2022), and Dong et al. (2022). The study of Shum et al. (2018) is the most cited research article, with a total of 223 citations and a citation rate of 37.17 citations per year (CpY). Govindan et al. (2019) have 66 citations (13.20 CpY), Popescu & Banţa (2019) have 29 citations (5.80 CpY), Sætra (2021) has 21 citations (7 CpY), Zhan et al. (2021) have 19 citations (6.33 CpY), Hongdao et al. (2019) have 17 citations (3.40 CpY), and Garvey et al. (2023) already have 13 citations (13 CpY), despite being a very recent study. The articles of Roszkowska (2021), Minkkinen et al. (2022), and Dong et al. (2022) have 12, 7, and 7 citations, respectively, with the article of Roszkowska (2021) having 4 CpY, and the articles of Minkkinen et al. (2022) and Dong et al. (2022) each having 3.50 CpY.
Table 1. Top 10 highly cited articles
| Authors | Affiliation of corresponding author | Title | Year | Cites | Cites per Year |
|---|---|---|---|---|---|
| Shum, et al. | Microsoft Corp | From Eliza to XiaoIce: challenges and opportunities with social chatbots. | 2018 | 223 | 37.17 |
| Govindan, et al. | University of Southern Denmark | Designing a sustainable supply chain network integrated with vehicle routing: A comparison of hybrid swarm intelligence metaheuristics | 2019 | 66 | 13.20 |
| Popescu and Banţa | University of Craiova | Performance evaluation of the implementation of the 2013/34/EU directive in Romania on the basis of corporate social responsibility reports | 2019 | 29 | 5.80 |
| Sætra | Ostfold University College | A Framework for Evaluating and Disclosing the ESG Related Impacts of AI with the SDGs | 2021 | 21 | 7.00 |
| Zhan et al. | University of Liverpool | The impact of sustainability on supplier selection: A behavioural study | 2021 | 19 | 6.33 |
| Hongdao et al. | Zhejiang University | Does what goes around really comes around? The mediating effect of CSR on the relationship between transformational leadership and employee’s job performance in law firms | 2019 | 17 | 3.40 |
| Garvey et al. | University of Kentucky | Bad news? Send an AI. Good news? Send a human | 2023 | 13 | 13.00 |
| Roszkowska | Hult International Business School | Fintech in financial reporting and audit for fraud prevention and safeguarding equity investments | 2021 | 12 | 4.00 |
| Minkkinen et al. | University of Turku | What about investors? ESG analyses as tools for ethics-based AI auditing | 2022 | 7 | 3.50 |
| Dong et al. | Shantou University | Can negative media coverage be positive? When negative news coverage improves firm financial performance | 2022 | 7 | 3.50 |
Shum et al. (2018) review conversational systems based on artificial intelligence. Shum et al. (2018) highlight that one of the fundamental challenges in AI is endowing the machine with the ability to converse with humans using natural language. In their study, they review the characteristics of Eliza (the first chitchat bot) and early chatbots starting in 1966, Parry (a conversational system that passed the Turing test in 1972), the Alice system of 1995, the Defense Advanced Research Projects Agency (DARPA) communicator program of 2000, Siri from Apple in 2011, and the social chatbot XiaoIce from 2014 (Shum et al., 2018). The study of Shum et al. (2018) is relevant for CSR because social chatbots can collect data and recognize the interests and intent of relevant stakeholders, thanks to the rich contextual information currently available to train conversational AI systems. Sentiment analysis techniques of AI can be applied to the data obtained from social chatbots to provide valuable insights into stakeholder sentiment towards specific initiatives or sustainability goals, helping organizations align their efforts with stakeholder expectations (Srivastava et al., 2021; Galiano-Coronil et al., 2024).
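A minimal sketch of this kind of analysis is given below, assuming the syuzhet package and a few hypothetical stakeholder messages collected through a social chatbot; the lexicon choice ("bing") and the messages themselves are illustrative assumptions.

```r
# Minimal sketch: score the sentiment of hypothetical stakeholder messages
# about a CSR initiative collected through a social chatbot.
library(syuzhet)

messages <- c(  # hypothetical messages
  "I love that the company is funding local reforestation",
  "The new packaging is still mostly plastic, very disappointing",
  "Great to see transparent reporting on supplier audits"
)

scores <- get_sentiment(messages, method = "bing")  # positive > 0, negative < 0
data.frame(message = substr(messages, 1, 45), sentiment = scores)

# Average sentiment as a simple signal of stakeholder alignment with the initiative
mean(scores)
```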
Conversational systems, such as the chatbots analyzed in Shum et al. (2018) and the new tools of generative AI based on Large Language Models (LLMs), can play a significant role in enhancing the accounting and accountability of CSR. In terms of accounting, AI systems for text generation can be applied to automate and streamline the CSR-related processes that generate CSR reports and disclosures, ensuring that these documents are accurate, consistent, and up-to-date. These AI systems can also track and report on key performance indicators (KPIs) related to CSR goals and objectives, and they can alert management when KPIs deviate from targets, enabling timely corrective action.
In relation to accountability, conversational systems can enhance transparency by providing stakeholders with real-time information on CSR efforts. By answering inquiries and providing real-time updates, conversational systems can encourage stakeholder participation and engagement in CSR programs. These systems can also facilitate two-way communication, allowing stakeholders to voice concerns and receive responses from the organization.
Conversational systems can also offer training and guidance to employees regarding the responsibilities associated with the different CSR policies, procedures, and best practices, and can help managers in making ethical decisions related to CSR by providing guidance and access to relevant frameworks, regulations, and guidelines. These AI systems can also identify opportunities for improvement in CSR practices through data analysis of the feedback on CSR initiatives obtained from customers, employees, and sustainability reports, which can help to fine-tune the CSR efforts within an organization.
Govindan et al. (2019)—the second most cited study found in the bibliometric analysis—propose a mathematical model for a distribution network of sustainable supply chains (SSCs). The traditional design of supply chains is focused only on the minimization of fixed and operating costs, but the design of SSCs seeks to find the supply chain configuration that maximizes economic, environmental, and social performance, hence minimizing the negative environmental and social impacts on different stakeholders, including company owners, workers, consumers, and society. In Govindan et al. (2019), the optimization of the SSC network is based on three artificial swarm intelligence algorithms: particle swarm optimization, electromagnetism mechanism, and an artificial bee colony. These metaheuristics are compared against genetic algorithms by Govindan et al. (2019).
In the context of accounting/accountability, CSR, and AI, the study of Govindan et al. (2019) highlights that it is not only artificial intelligence that tries to emulate human intelligence that is relevant for CSR: artificial intelligence inspired by self-organizing biological processes and structures is also relevant for the accounting and accountability of CSR. In terms of accounting, AI inspired by self-organizing biological processes can help to track the divergence from the optimal combination of sustainability metrics, since the metaheuristics of AI inspired by biological processes are focused on optimization in large search spaces, such as those arising from the multiple regulations that must be met to properly achieve CSR. In relation to accountability, AI inspired by biological processes can guide ethical practices and responsible sourcing in SSCs such as those of Govindan et al. (2019), by tracking and checking the optimal or near-optimal CSR situation at every stage of a sustainable supply chain.
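The following toy sketch, written in base R, illustrates the general idea (it is not the model of Govindan et al., 2019): a minimal particle swarm optimizer searches for the allocation of a hypothetical CSR budget across three programs that maximizes a weighted economic/social/environmental score; the scoring function, weights, and budget are illustrative assumptions.

```r
# Minimal particle swarm optimization (PSO) sketch for a toy CSR allocation problem.
set.seed(42)

# Hypothetical scoring function with diminishing returns on each dimension
csr_score <- function(x) {
  0.4 * sqrt(x[1]) + 0.3 * sqrt(x[2]) + 0.3 * sqrt(x[3])  # economic, social, environmental
}

n_particles <- 30; n_iter <- 100; dim <- 3; budget <- 100

# Random initial allocations scaled to the budget, plus random initial velocities
pos <- matrix(runif(n_particles * dim), n_particles, dim)
pos <- budget * pos / rowSums(pos)
vel <- matrix(runif(n_particles * dim, -1, 1), n_particles, dim)

pbest <- pos; pbest_val <- apply(pos, 1, csr_score)
gbest <- pbest[which.max(pbest_val), ]

for (it in 1:n_iter) {
  r1 <- runif(n_particles * dim); r2 <- runif(n_particles * dim)
  vel <- 0.7 * vel +
         1.5 * r1 * (pbest - pos) +
         1.5 * r2 * (matrix(gbest, n_particles, dim, byrow = TRUE) - pos)
  pos <- pmax(pos + vel, 0)                    # allocations cannot be negative
  pos <- budget * pos / (rowSums(pos) + 1e-9)  # re-impose the budget constraint
  vals <- apply(pos, 1, csr_score)
  improved <- vals > pbest_val
  pbest[improved, ] <- pos[improved, ]
  pbest_val[improved] <- vals[improved]
  gbest <- pbest[which.max(pbest_val), ]
}
round(gbest, 1)  # near-optimal split of the budget across the three programs
```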
The article by Popescu & Banța (2019)—the third most cited article—discusses compliance with CSR based on non-financial information of 680 companies in Romania. Popescu & Banța (2019) do not deal with the role of AI in the accounting and accountability of CSR, but they do highlight the decisive role of intellectual capital for innovation, entrepreneurship, knowledge flows, creativity, and technological development. In contrast, the next most cited article, Sætra (2021), argues that the concept of ESG (environment, social, governance) is increasingly replacing the term CSR, but—according to Sætra (2021)—none of these frameworks sufficiently captures the nature of the sustainability-related impacts of AI. Sætra (2021) suggests that the accounting of ESG/CSR in the context of AI can be accomplished within the framework of the Sustainable Development Goals (SDGs) of the United Nations. In the accounting framework suggested by Sætra (2021), positive and negative impacts are presented side by side for each of the 17 SDGs, at the micro level (workers), meso level (company impacts), and macro level (societal level). The accounting framework of Sætra (2021) can also be useful for the accountability of CSR in the context of AI, as it can be applied to identify the responsibility for the benefits and downsides of each CSR action in the context of the SDGs. For example, Kaab et al. (2019) show that AI can be used to increase the effective utilization of resources and make waste management more effective, leading to more sustainable businesses and economic activities, in line with circular economy principles.
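A minimal sketch of the data structure implied by this side-by-side accounting is shown below in base R; the SDGs, levels, and impact descriptions are hypothetical entries, not results from Sætra (2021).

```r
# Minimal sketch of an SDG-based impact ledger in the spirit of Sætra (2021):
# positive and negative impacts recorded side by side for selected SDGs at the
# micro (workers), meso (company), and macro (society) levels.
impact_ledger <- data.frame(
  sdg      = c("SDG 8: Decent work", "SDG 12: Responsible consumption", "SDG 13: Climate action"),
  level    = c("micro", "meso", "macro"),
  positive = c("AI scheduling reduces overtime",
               "Waste sorting optimized with computer vision",
               "Route optimization cuts fleet emissions"),
  negative = c("Algorithmic monitoring raises privacy concerns",
               "Energy footprint of model training",
               "Rebound effect from cheaper logistics"),
  stringsAsFactors = FALSE
)

# Side-by-side view used for CSR reporting and for attributing accountability
print(impact_ledger)
```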
Zhan et al. (2021) analyze managers’ evaluations of suppliers and the consequent selection of a supplier based on the economic, social, and environmental dimensions of sustainability. According to the results of Zhan et al. (2021), managers have favorable preferences regarding suppliers’ sustainability strategies, but at the same time they tend to downplay the social and environmental dimensions compared to the economic dimension, which is the fundamental dimension considered by managers. Zhan et al. (2021) conclude that suppliers need to offer additional information regarding their social and environmental efforts, since this strategy will lower the uncertainties and risks that managers confront during supplier selection. In this sense, AI decision-making models can analyze and suggest strategies that account for and promote the social and environmental dimensions in an optimal combination with the economic dimension, helping companies interested in choosing a sustainable supplier for their operations.
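A minimal sketch of how a decision-support model might combine the three dimensions into a single supplier score is shown below; the supplier data and the weights are hypothetical and would, in practice, be elicited from managers or learned from past selection decisions.

```python
# Minimal, illustrative multi-criteria supplier score that balances the economic
# dimension with the social and environmental dimensions. Data and weights are invented.
import numpy as np
import pandas as pd

suppliers = pd.DataFrame(
    {"economic": [0.9, 0.7, 0.8],        # e.g., price competitiveness (higher is better)
     "social": [0.4, 0.8, 0.6],          # e.g., labor-practice score
     "environmental": [0.5, 0.9, 0.7]},  # e.g., emissions performance
    index=["Supplier A", "Supplier B", "Supplier C"],
)

# Weights chosen to counter the tendency to downplay the social and
# environmental dimensions relative to the economic one.
weights = np.array([0.4, 0.3, 0.3])

suppliers["score"] = suppliers[["economic", "social", "environmental"]].to_numpy() @ weights
print(suppliers.sort_values("score", ascending=False))
```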
Hongdao et al. (2019) address the association between transformational leadership, CSR, and job performance. Transformational leaders are those who motivate subordinates to achieve excellence through high individual job performance and higher levels of organizational commitment. Hongdao et al. (2019) argue that transformational leadership affects job performance directly and indirectly through CSR, since employees’ individual perceptions of CSR affect their commitment and their individual responsibilities, given the influence of CSR on employee emotions, working behavior, and job attitudes. The results of Hongdao et al. (2019) indicate that the accountability of CSR can be enhanced if AI is used to properly inform workers about the social and environmental impact of their jobs, as workers will feel that they play an individual and crucial role in the collective CSR actions of a company.
Garvey et al. (2023) found that AI is perceived to have weaker intentions than humans; that is, consumers infer that AI lacks both selfish and benevolent intentions, thereby dampening the extremity of consumer responses. Garvey et al. (2023) suggest that anthropomorphizing AI agents may help to strengthen perceived intentions, since an AI agent that appears more humanlike is perceived to have stronger intentions. The results of Garvey et al. (2023) suggest that if AI is used in a company to enhance the accounting and accountability of CSR actions, this AI should be less humanlike in order to avoid a lack of trust in AI technologies among stakeholders.
Roszkowska (2021) explores the audit-related causes of financial statement fraud based on case studies of Enron and Arthur Andersen. Roszkowska (2021) concludes that—instead of additional regulations—technological advancements such as artificial intelligence based on machine learning, including neural networks and deep learning, can solve reporting and audit-related problems by enhancing the reliability of financial statements. Specifically, Roszkowska (2021) highlights that machine learning models that automatically detect anomalous actions and operations can reduce human effort and costs while improving accuracy. The implementation of this type of algorithmic auditing will allow auditors to focus on supervising the outcomes of AI and evaluating whether the identified irregularities are in fact related to fraud. Another advantage, according to Roszkowska (2021), is that AI is trained on big data containing the aggregated experience of a large number of historical audits, which eliminates the human errors of single auditors with limited experience. In terms of accountability, Roszkowska (2021) indicates that AI-based solutions for algorithmic auditing can connect reporting outcomes with their originators to either eliminate an error or uncover and report fraudulent behavior.
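As a minimal illustration of this kind of algorithmic auditing—not the specific approach of Roszkowska (2021)—the following sketch flags anomalous journal entries with an isolation forest; the synthetic features (amount and posting hour) are assumptions chosen for the example.

```python
# Minimal, illustrative anomaly detection for algorithmic auditing: flag unusual
# journal entries so that auditors review only a short list. Data are synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "normal" journal entries: amount and posting hour.
normal = np.column_stack([rng.lognormal(7, 0.4, 1000), rng.normal(14, 2, 1000)])
# A few suspicious entries: unusually large amounts posted in the middle of the night.
suspicious = np.array([[120000, 2.5], [95000, 3.0], [150000, 1.0]])
entries = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.01, random_state=0).fit(entries)
flags = model.predict(entries)          # -1 marks entries considered anomalous

# Auditors then focus on the flagged entries instead of the full ledger.
print("Entries flagged for human review:", np.where(flags == -1)[0])
```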
Minkkinen et al. (2022) also discuss algorithmic AI auditing, specifically in the context of the responsible use of AI to account for ESG and CSR. Concretely, Minkkinen et al. (2022) argue that ethics-based AI auditing has to be framed on principles of transparency, fairness, lawfulness, non-maleficence, responsibility, and privacy. In terms of accounting, the results of Minkkinen et al. (2022) emphasize that the impact of AI—including its risks—should be included as part of ESG evaluations, even as an additional ESG dimension or indicator. Accountability of AI in the context of CSR, on the other hand, needs to be framed at the company level, according to Minkkinen et al. (2022), because AI developers have deeper knowledge of these tools and their intricacies than, for example, the superficial knowledge that investors and other stakeholders may have. Accountability strategies based on clear and fair rules for AI developers will help to avoid irresponsible actions that disregard ethical principles.
Finally, Dong et al. (2022) evaluated the impact of negative news on the financial performance of 1,579 companies listed on the Shanghai and Shenzhen stock exchanges from 2013 to 2019. The negative news about companies—concerning unethical behavior and fraud—was collected, sorted, and analyzed by the artificial intelligence algorithm of the Financial News Database of Chinese Listed Companies (CFND). Dong et al. (2022) do not discuss the accounting or the accountability of AI in the context of CSR, but they find that negative CSR news collected by AI has an inverted U-shaped effect on firm performance: a small amount of negative news increases firm performance by raising investors’ awareness of the firm and its accessibility, but when the volume of negative news is large, the amount of evidence available to investors also increases, and the accumulation of negative news raises the risk of a stock price crash. As a robustness test, Dong et al. (2022) also contrasted negative news about unethical behavior with negative news about fraud, and they obtained larger effect sizes in the regressions that exclude frauds (Dong et al., 2022, Tables 5 and 6). This last finding shows that while AI can increase transparency about the situation of companies, it is important to consider the ethical aspects of AI algorithms, since investors perceive the ethical implications of AI as a relevant issue in the accounting and accountability of CSR actions.
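To make the functional form of such an inverted U-shaped relationship concrete, the sketch below fits a quadratic regression of simulated firm performance on the count of negative news items; the data are simulated and the specification is only analogous in form to the panel regressions of Dong et al. (2022).

```python
# Illustrative quadratic regression for an inverted U-shaped effect of negative
# news volume on firm performance. All data are simulated, not from Dong et al. (2022).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
news_count = rng.poisson(6, 500).astype(float)
# Simulated performance: rises with a little news, falls when news accumulates.
performance = 0.8 * news_count - 0.06 * news_count**2 + rng.normal(0, 1, 500)

X = sm.add_constant(np.column_stack([news_count, news_count**2]))
fit = sm.OLS(performance, X).fit()

b1, b2 = fit.params[1], fit.params[2]
print("Coefficients (intercept, news, news^2):", np.round(fit.params, 3))
# An inverted U requires b1 > 0 and b2 < 0; the turning point is -b1 / (2 * b2).
print("Estimated turning point (news items):", round(-b1 / (2 * b2), 1))
```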
5. Discussion
The results of the bibliometric analysis, applied as a framework to obtain insights from scientific mapping, indicate that emerging AI technologies are still specialized and peripheral to the core research in CSR. Nevertheless, the accounting of CSR can be enhanced if AI technologies such as social chatbots based on large language models (LLMs) or machine learning are applied to automatically generate more accurate CSR reports, framed on the United Nations’ Sustainable Development Goals, hence solving financial reporting and audit-related problems by reducing human error and enhancing the reliability of the information in financial statements through algorithmic auditing. The results also indicate that the accountability of CSR can be improved if AI is applied to connect the reporting outcome with its originator—thus limiting irresponsible actors from reaping excess benefits by disregarding ethical considerations—and if transformational leadership and non-anthropomorphic AI tools increase the credibility of CSR disclosure and promote the engagement and responsibility of stakeholders. The findings of the bibliometric analysis also suggest that AI that emulates human intelligence—such as machine learning and deep learning—can be used for the sentiment analysis of CSR data, and that AI that emulates the intelligence of other biological processes—such as swarm intelligence based on artificial bee colonies or genetic algorithms—can be applied in CSR to support data-driven decision-making that properly accounts for the optimal combination of economic, environmental, and social impacts that maximizes CSR and ESG efforts.
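As a minimal illustration of the sentiment analysis mentioned above, the following sketch scores invented stakeholder comments with an off-the-shelf pretrained transformer; the model choice and the example comments are assumptions rather than a prescribed pipeline.

```python
# Minimal, illustrative sentiment analysis of stakeholder feedback on CSR initiatives
# using a default pretrained transformer model. The comments are invented examples.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default pretrained model

stakeholder_feedback = [
    "The new community program has genuinely improved local employment.",
    "The sustainability report feels vague and avoids the emissions data.",
    "Great to see the supplier code of conduct finally being enforced.",
]

# Print the predicted polarity and confidence next to each comment.
for text, result in zip(stakeholder_feedback, classifier(stakeholder_feedback)):
    print(f"{result['label']:<9} ({result['score']:.2f})  {text}")
```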
Nonetheless, as noted by one of the external reviewers of this study, the documents found to be the most cited in the scientometric analysis tend to highlight AI’s positive aspects and opportunities for the accounting and accountability of CSR, while the risks and ethical challenges of implementing AI algorithms are not sufficiently considered and discussed by the authors of the most cited documents. Greenwashing and data privacy issues are, for example, two potential negative impacts of generative AI for CSR not discussed by the most cited authors, besides the consequences of potential biases in the AI outputs. Generative AI can be applied for greenwashing if organizations misuse these technologies to create fake CSR content that makes companies look more responsible than they actually are. And since generative AI algorithms are trained on large amounts of data, privacy concerns may arise when training the generative AI models. Additionally, Wach et al. (2023) identified specific risks associated with ChatGPT (versions GPT-3.5 and GPT-4) due to the lack of regulation of the AI market; these risks include the potential generation of disinformation, deepfake content, algorithmic bias, and social surveillance, among other consequences. Ho & Nguyen (2023) suggested that the elimination of misinformation, disinformation, and damaging content within an organization can be promoted by rewarding practices and products that strengthen the integrity of CSR practices.
Biases in the output of AI recommendation systems are also a critical challenge in the implementation of AI for the accounting and accountability of CSR. Ntoutsi et al. (2020) define this bias as “the inclination or prejudice of a decision made by an AI system which is for or against one person or group, especially in a way considered to be unfair” (Ntoutsi et al., 2020, p. 3). These AI biases can cause unfairness and discrimination against subgroups of vulnerable individuals, due to—for example—gender, ethnicity, religion, age, physical appearance (weight), disability, sexual orientation, economic status, or personal ideologies. Yao & Huang (2017) highlighted that these biases arise in AI due to observation bias and imbalanced data. Observation bias implies that an AI model produces predictions only similar to the data used to train the model, so that novel cases cannot be predicted; Cowgill & Tucker (2020) refer to this source of algorithmic unfairness as bias in algorithmic predictions, arising from unrepresentative training samples, mislabeled outcomes in training samples, coding/programming bias, and algorithmic feedback loops. The bias from imbalanced data, in turn, occurs when systematic societal, historical, or other ambient biases are present in the data due to the low number of observations for vulnerable groups; the AI systems then inherit the biases that exist in the data, which causes them to produce unfair recommendations (Milano et al., 2020). Harrer (2023) also recently noted the existence of these biases in generative AI applications such as LLMs, because these generative systems have been trained on both biased and fair content from the internet, and thus there is a risk that they create and spread misinformation or harmful and inaccurate content. Moreover, Harrer (2023) adds that the probabilistic nature of generative AI poses a reliability and reproducibility problem that can only be solved by human oversight of model operation.
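A simple, illustrative check of whether such biases surface in a model’s recommendations is to compare selection rates across groups; the sketch below computes the demographic parity difference on hypothetical outputs of a recommendation system.

```python
# Minimal, illustrative fairness indicator: the demographic parity difference on the
# outputs of a recommendation or decision model. Groups and predictions are hypothetical.
import pandas as pd

results = pd.DataFrame({
    "group": ["A"] * 80 + ["B"] * 20,                          # imbalanced representation
    "recommended": [1] * 56 + [0] * 24 + [1] * 8 + [0] * 12,   # favorable outcome = 1
})

selection_rates = results.groupby("group")["recommended"].mean()
parity_gap = selection_rates.max() - selection_rates.min()

print(selection_rates)
print("Demographic parity difference:", round(parity_gap, 2))
# A gap of 0.30 (0.70 vs. 0.40) signals that the underrepresented group receives
# favorable recommendations far less often and warrants further investigation.
```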
In light of the potential bias and unfairness that can arise when implementing AI technologies, Gupta (2021) encourages businesses to consider measuring and reporting algorithmic fairness as part of their code of conduct and CSR efforts. Accounting and accountability of AI in the context of CSR can also be promoted, according to Gupta (2021), if managers who apply AI-based recommendations for decision-making increase awareness among employees about the possibility of inherent biases in AI algorithms. Accordingly, a deeper understanding of the AI technology implemented for the accounting and accountability of CSR could be promoted if these technologies are implemented by multidisciplinary teams trained in both accounting and information technology. From a technical perspective, Ntoutsi et al. (2020) provide a survey of specific statistical techniques for mitigating bias in AI algorithms—namely, balancing data during the preprocessing stage, regularizing or changing the target during the training stage of the algorithms, and applying post-processing methods such as wrapping a fair classifier on top of a black-box AI classifier (Agarwal et al., 2018). Ntoutsi et al. (2020) also consider it important to move beyond traditional AI algorithms optimized for predictive performance and to embed ethical and legal principles in the design, training, and deployment of AI systems in order to ensure social good while still benefiting from the huge potential of AI technology. More recently, Camilleri (2023) analyzed the ethical considerations and implications of AI for social responsibility and concluded that a responsible AI governance framework—besides considering transparency, explainability, interpretability, fairness, and inclusiveness—needs to take actions to improve robustness in the implementation of AI in order to reduce the chances of threats produced by information leakages, privacy violations due to data breaches, and contamination and poisoning of data sets by malicious actors. Including these actions as part of the accounting of AI in CSR reports will ensure privacy and safety for stakeholders and will improve the accountability of developers of automated AI systems.
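To illustrate the first of these mitigation routes—balancing data during preprocessing—the sketch below oversamples an underrepresented group before model training; the column names and data are hypothetical, and the other stages surveyed by Ntoutsi et al. (2020) (in-training regularization and post-processing wrappers such as Agarwal et al., 2018) would require additional tooling.

```python
# Minimal, illustrative preprocessing mitigation: oversample an underrepresented group
# so that it is not ignored during model training. Data and column names are invented.
import pandas as pd
from sklearn.utils import resample

train = pd.DataFrame({
    "group":   ["A"] * 90 + ["B"] * 10,            # group B is underrepresented
    "feature": list(range(90)) + list(range(10)),
    "label":   [1, 0] * 45 + [1, 0] * 5,
})

majority = train[train["group"] == "A"]
minority = train[train["group"] == "B"]

# Oversample the minority group until both groups are equally represented.
minority_upsampled = resample(minority, replace=True,
                              n_samples=len(majority), random_state=0)
balanced = pd.concat([majority, minority_upsampled]).sample(frac=1, random_state=0)

print(balanced["group"].value_counts())  # groups A and B now appear 90 times each
```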
6. Conclusion
Corporate Social Responsibility (CSR) plays a vital role in the current competitive business era, offering both business advantages and societal benefits. However, implementing effective CSR practices can be challenging due to numerous factors, including a lack of alignment, limited stakeholder engagement, and difficulties in identifying relevant issues. To enhance the impact of CSR, organizations prioritize accounting and accountability through robust disclosure and reporting practices, monitoring and evaluation systems, and external verification. Leveraging emerging AI technologies can further enhance CSR by promoting transparency, informed decision-making, and the engagement of stakeholders.
The results of scientific mapping based on bibliometric analysis indicate that AI technologies are still an emerging topic not yet fully developed, but at the same time they provide opportunities to enhance the accounting and accountability of CSR. In terms of accounting, AI text-generation systems can be applied to automate CSR reports and disclosures, thus improving transparency. Transparent reporting builds trust, establishes commitment, and permits stakeholders to assess an organization's social, ethical, and environmental impacts (Dubbink et al., 2008). In relation to accountability, the results show that AI sentiment analysis techniques can be applied to obtain valuable insights into stakeholder attitudes towards CSR initiatives and sustainability goals, helping organizations align their efforts with stakeholder expectations. The findings also highlight the importance of AI inspired by self-organizing biological processes to track the divergence from the optimal combination of sustainability metrics and to support optimal decision-making by managers in terms of economic, social, and environmental governance factors. The results of the bibliometric analysis also indicate that the accounting of CSR can be based on maps of positive and negative impacts (at the macro, meso, and micro levels) for each of the SDGs. Accounting and accountability of CSR can be promoted as well by implementing algorithmic automation and auditing with less humanlike AI technologies, since anthropomorphic AI that answers stakeholder concerns may create a lack of trust in relation to CSR practices.
Based on these findings, it can be concluded that emerging AI tools such as those based on LLMs are powerful instruments for improving CSR, with possibilities yet to be explored and exploited. AI algorithms that automate the analysis of CSR reports, extract valuable information and insights related to sustainability goals, gauge stakeholder sentiment, and monitor performance metrics to prompt data-driven decisions can certainly enhance CSR strategies. However, when implementing AI technologies, organizations need to address challenges such as data privacy, technical complexities, resource constraints, digital gaps, and—crucially—algorithmic biases. The challenges posed by the new AI technologies can be addressed by developing technological and organizational mitigation strategies, including nurturing digital literacy and inclusivity, implementing robust data protection measures, and building internal capacity through the formation of multidisciplinary teams in charge of implementing and explaining the AI algorithms (Ng et al., 2021; Şimşek et al., 2022). In relation to the identification and mitigation of algorithmic biases, Harrer (2023) suggested that generative AI technologies should be companions to humans, augmenting but not replacing their role in decision-making, knowledge retrieval, and other cognitive processes.
Strategies for promoting transparency and fairness in the implementation of AI—such as including indicators of algorithmic fairness and explainability/interpretability as part of the reporting of CSR efforts (Gupta, 2021)—are also key mitigation strategies to reduce the potential biases generated by emerging AI technologies. Organizations that adopt transparent reporting practices in relation to their use of AI, disclose relevant information about their AI implementation and its fairness, and communicate their progress and outcomes to stakeholders will build trust, foster stakeholder engagement, and enhance their reputation. Accounting of AI algorithms can also be promoted, according to Ntoutsi et al. (2020), via bias-aware data collection and by improving the explainability and interpretability of AI algorithms, with explainability referring to explaining in human terms the internal mechanics of AI systems, and interpretability referring to the ability to predict what will happen given a change in the model input or parameters. This transparency in the explainability and interpretability of AI algorithms can be added to the transparent reporting of CSR guided by recognized frameworks and standards, such as the Global Reporting Initiative (GRI) or the Sustainability Accounting Standards Board (SASB), to ensure consistency and comparability of information (Dubbink et al., 2008).

In the case of AI, the High-Level Expert Group of the European Commission established ethical guidelines for trustworthy AI that are relevant for the accountability and disclosure of CSR in multiple organizations (Smuha, 2019). According to the Guidelines, trustworthy AI should be lawful (respecting all applicable laws and regulations), ethical (respecting ethical principles and values), and robust, both from a technical perspective and considering its social environment. The guidelines also put forward seven key requirements that AI technologies should follow: human agency and oversight (AI systems should empower human beings, allowing them to make informed decisions), technical robustness and safety, privacy and data governance, transparency (AI technologies should be explained in a manner adapted to the stakeholder concerned), diversity, non-discrimination and fairness, societal and environmental well-being—with particular relevance to CSR, since AI technologies should be sustainable and environmentally friendly—and accountability. Robust monitoring and evaluation systems of the impact of AI technologies on CSR will in turn enhance accountability and auditability (Hira & Busumtwi-Sam, 2021). Monitoring involves regularly collecting data and information on the progress, performance, and outcomes of AI technologies that impact CSR activities; evaluation, on the other hand, assesses the effectiveness and impact of these initiatives. By implementing monitoring and evaluation systems, organizations can track the implementation of AI technologies that affect CSR initiatives, identify areas for improvement, and make informed decisions based on evidence (Stahl et al., 2023).
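As one hedged example of an interpretability indicator that could accompany such disclosures, the sketch below estimates permutation importance, which approximates how much model predictions change when an input is perturbed; the data, feature names, and model are synthetic placeholders rather than a prescribed reporting standard.

```python
# Minimal, illustrative interpretability check: permutation importance estimates how
# strongly predictions depend on each input feature. All data here are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(3)
X = rng.normal(size=(400, 3))                   # e.g., spend, emissions, region code
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 400) > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)

# Report the average drop in accuracy when each feature is shuffled.
for name, imp in zip(["spend", "emissions", "region"], result.importances_mean):
    print(f"{name:<10} importance: {imp:.3f}")
```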
Regular review and analysis of AI data and algorithms will help organizations measure their progress, identify trends, and demonstrate continuous improvement in the implementation of AI technologies to improve the accounting and accountability of CSR practices (Minkkinen et al., 2022; Stahl et al., 2023). External verification and certification, in turn, are additional measures to enhance transparency, as an independent third-party verification can provide an objective assessment of an organization's CSR performance and validate its claims. Verification can be conducted by external auditors, sustainability consultants, or specialized certification bodies (Kirst et al., 2021). All in all, reporting practices regarding the implementation of AI technologies and AI-related CSR initiatives, based on the disclosure of relevant information and on external verification and certification, will improve the credibility of CSR accounting and accountability based on AI technologies (Benton et al., 2018; Dubbink et al., 2008; Ebinger & Omondi, 2020; Martínez-Ríos et al., 2020). By embracing AI technologies that enhance the accounting and accountability of CSR practices, framed within transparency and mitigation strategies that reduce algorithmic biases and promote sustainable development, companies can create a positive impact on society and the environment, contribute positively to their communities, respect human rights, and foster a sustainable and inclusive future.