Expert comment

AI is not just a technology. It has become a stakeholder

AI innovation continues to unlock significant value creation for organisations. A recent article in the Financial Times points to the power of AI to ‘help deliver progress towards 134 of the SDG targets by enabling innovations in areas from sustainable food production to better access to health, clean water and renewable energy’. The article goes on to mention the potential for AI to accelerate research initiatives through the use of digital twins, and its power to support climate interventions through at-scale analysis of satellite imagery.

But alongside new innovations, entrepreneurial progress and profitable outcomes comes the risk that we are unleashing the darker side of AI. For example, concerns are growing about the ability of generative AI to ‘hallucinate’ - creating and spreading misinformation by ‘predicting’ responses rather than delivering factual answers to submitted user queries. A recent paper co-authored by former Oxford University Centre for Corporate Reputation (Centre) post-doctoral scholar Tim Hannigan coins the term ‘botshit’ for this new phenomenon, linking four modes of chatbot work (authenticated, autonomous, automated and augmented) with four types of ‘botshit’ related risk (ignorance, miscalibration, routinisation and black boxing).

 

The examples above show how AI can be deployed for good as well as for ill. Research undertaken by the Centre includes a current exploration of the dimensions of trust in AI in fintech, as well as published work on the reputational dimensions of AI failures. As AI continues to shape organisational practices ('capabilities') and organisational behaviours ('character'), we need to think carefully about what AI should do as well as what it is able to do. Governance of this important new tool will be key.

In academic literature, policy forums, newspapers and public discourse, AI is mainly referred to as a technology. But that should not be taken to mean that it is morally or operationally neutral. It also underplays, and in my view wrongly positions, AI as solely a technology. I therefore argue that AI, especially generative AI, should now be treated as a stakeholder, not simply a technology.

After all, generative AI models create content just as journalists write articles, politicians make speeches, think tanks produce policy reports, and designers and editors create images and videos. Why should a content-creating AI be treated any differently from other content-creating stakeholders? Organisations have sophisticated engagement strategies for their stakeholders; should they not do the same for AI? And if that is to happen, what could it look like? Engagement with developers to help shape the way they write their algorithms? Curating and supplying training sets for responsible AI?

There are strong arguments as to why AI cannot be conceptualised as a stakeholder. Foremost among these is the argument put forward by Oxford’s Professor Carissa Véliz (and others) that to be a stakeholder you need to have interests, agency and the ability to be held accountable for the content that is created. In her excellent paper ‘Moral zombies: why algorithms are not moral agents’, Professor Véliz argues that algorithms are ‘incoherent as moral agents because they lack the necessary moral understanding to be morally responsible’. Followed through, this argument means that AI cannot be conceptualised as a stakeholder, and that managers should therefore restrict themselves to governing AI as a normatively neutral technology.

There is, however, another view. In a recent paper, Oxford Professor of Law and Finance Alan Morrison and Oxford University Centre for Corporate Reputation International Research Fellows Rita Mota (ESADE) and William J. Wilhelm (University of Virginia) draw on Yale philosopher Stephen Darwall’s insights to argue that corporations can in fact be said to possess moral agency. Their argument is that organisations do have interests, that they do have agency and that they can be held accountable for their actions. Moral agency, on this view, rests not at the individual level but at the relational level: it arises from the organisation’s interactions with others. For this to be possible, three conditions need to be satisfied:

  • first, that the actions of an organisation can elicit reactive attitudes (like love, resentment, anger, or indignation)
  • second, that these are able to be communicated to and understood by the organisation as a whole (via what the authors call a second-personal address)
  • third, that the organisation is capable of recognising the moral authority of second-personal addresses and of responding in a way that acknowledges that authority (the authors call this the authority condition). 

This seems entirely plausible. Take Boeing’s actions around its (multiple) 737 Max crises. Boeing’s actions following the two fatal crashes were met with anger from many stakeholders; the organisation recognised that anger as carrying moral authority, and this in turn moderated its response strategies.

 

Reading this paper, it is easy to see the parallels between corporations and generative AI. Generative AI can elicit emotional responses and can be engaged with in dialogue. In my view it is therefore possible to consider generative AI as a kind of moral agent, which opens it up to being engaged with as a stakeholder. Conceptualising AI as a stakeholder also gives us the ability to exercise our own agency, as individuals or as corporations, to construct a strategy of engagement aimed at influencing or shaping the outputs being generated.

That leads us to the question: engagement with whom? It seems important to correctly identify the persons, functions or organisations involved in the creation or use of generative AI. Organisations might therefore consider establishing stakeholder engagement plans in three distinct areas:

  1. Algorithm developers: algorithms are written directly by humans, or by algorithms that humans have created to write new algorithms. In either case, human programming ingenuity lies at the heart of what algorithms are able to do. Corporations, or individuals, can seek to engage with these people before an algorithm is created, or iteratively as it is written. This process is analogous to the work done to influence, inform and shape policy development, and would focus on the technical design elements in play.
  2. Algorithm users: almost all organisations today deploy algorithms across their operations, whether in banking (to identify customer risk), retail (to analyse customer habits and preferences) or big tech (to identify profitable new market opportunities). Engaging with the corporations deploying algorithms is a second strategy that could be adopted, and would naturally focus on governance themes around how the algorithm is used.
  3. Algorithm data sets: algorithms are trained on data sets, and these data sets have a significant influence on the way an algorithm develops its outputs over time. A third stakeholder engagement strategy would seek to identify or provide appropriate training sets for new algorithms, or to persuade and inform developers of the merits and demerits of using particular data sets.

Stakeholder engagement lies at the core of corporate strategy. This article argues that AI, particularly generative AI, has now become an important tool for organisations, and that its ability to create information and shape opinions requires it to be conceptualised as a stakeholder rather than simply as a technology. In doing so, organisations and their leadership teams will be able to regain some important and much-needed agency over the powerful emerging phenomenon that is generative AI.

(The images in this piece were generated by an AI image generator)