AI: the bad

4 minute read
""

The evolution of artificial intelligence (AI) in recent years has been labelled disturbing, frustrating and fascinating. The advancements have also been called biased and prejudiced. In this second instalment of my three-part blog series, ‘AI: the good, the bad and the ugly’, I’ll be sharing three thoughts on why the AI revolution might not be all it’s hyped up to be; ideas and considerations which ultimately propelled me to embark on the Executive Diploma in Artificial Intelligence for Business at Saïd Business School, University of Oxford.

1. Bias in AI systems

The plethora of evidence to support claims of bias in AI may be unnerving for many. It also raises the question of whether we should put our faith in AI systems and trust them to make crucial decisions for us. When looking at the large language models readily available to anyone with internet access, it is worth examining some of the issues underlying the limitations and shortcomings of AI. The discussion has often been relatively lightweight, with a large portion of social and mainstream media coverage focusing on ChatGPT – currently the most popular AI system – and other generative systems. Some examples are perhaps frivolous, as when the new Bing search with integrated ChatGPT was manipulated into stating a user had won the 2023 Turing award. There are countless such examples that are more amusing than cause for worry.

However, AI has also made severe, even life-altering mistakes, where bias has caused unfair and disparate outcomes for minority groups. A serious example is COMPAS, a system used by US judges to help decide whether to release an offender by estimating the likelihood of that person committing another crime. An investigation by ProPublica showed that COMPAS was biased against African Americans. Yet it had been used in numerous cases within the judicial system, with judges even altering their sentences based on the score COMPAS provided. No mention, however, was made of whether prior rulings were revisited after this bias had been identified.

2. Problems with training data

The problem with COMPAS did not stem from errors in system design that caused African Americans to be misclassified as high-risk, but from the data used to train the system.
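To make the point concrete, here is a minimal, purely hypothetical sketch in Python – not the actual COMPAS model or its data – showing how an ordinary classifier trained on skewed labels can flag one group as high-risk far more often than another, even when the two groups behave identically. The groups, rates and features are all invented for illustration.

```python
# A toy sketch (not COMPAS itself): skewed training labels alone can
# produce disparate outcomes. Every name and number below is an
# illustrative assumption, not real recidivism data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

group = rng.integers(0, 2, n)      # 0 = group A, 1 = group B
risk = rng.normal(0.0, 1.0, n)     # a genuine, group-neutral risk signal

# Both groups reoffend at the same rate given the same risk signal.
p_true = 1.0 / (1.0 + np.exp(-(risk - 0.5)))
reoffends = (rng.random(n) < p_true).astype(int)

# But the *recorded* labels are biased: group B non-reoffenders are
# wrongly recorded as reoffenders 20% of the time.
recorded = reoffends.copy()
mislabelled = (group == 1) & (reoffends == 0) & (rng.random(n) < 0.2)
recorded[mislabelled] = 1

# Train an ordinary classifier on the biased records.
X = np.column_stack([risk, group])
model = LogisticRegression().fit(X, recorded)
predicted_high_risk = model.predict(X)

# False positive rate per group, measured against the *true* behaviour.
for g, name in [(0, "group A"), (1, "group B")]:
    innocent = (group == g) & (reoffends == 0)
    fpr = predicted_high_risk[innocent].mean()
    print(f"{name}: flagged high-risk despite not reoffending: {fpr:.1%}")
```

In this toy setting the disparity comes entirely from the mislabelled records: the model code contains nothing discriminatory, yet the false positive rate for group B ends up markedly higher.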

Delphi, a research prototype for a 'moral guide', is a freely available, popular AI system that reaches conclusions based on its training data, a collection its developers call the 'Commonsense Norm Bank'. Delphi gives the user its version of the 'moral average' answer based on the available data. However, not all of the data the system has been trained on comes from reviewed or reputable sources; a considerable portion was crowdsourced from, amongst others, Reddit. Reddit does not collect demographic data on its roughly 330 million users, so there is no verified information about the makeup of its user base. According to a recent report, the Reddit community is not considered a representative sample of the population, as its users are thought to skew male and younger than the general public. It has also been pointed out that the system reaches its judgements from training data in which not all of the recorded views are themselves moral, and there is no guarantee that the most common or frequent views are the most accurate or ethical. All that systems such as Delphi can therefore do is reflect the status quo and a discourse that does not evolve.
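To illustrate how a 'moral average' inherits the shape of the sample it is drawn from, here is a deliberately simplified sketch – not Delphi's actual method or data – in which the 'average' answer is simply the majority vote of whoever happened to be in the annotator pool. The questions, answers and proportions are all invented.

```python
# A toy illustration (not Delphi's real pipeline): the "moral average" of a
# crowdsourced pool of judgements is just whichever view dominates the pool.
from collections import Counter

def moral_average(judgements: list[str]) -> str:
    """Return the most common judgement in the pool."""
    return Counter(judgements).most_common(1)[0][0]

# The same question, judged by two differently skewed annotator pools.
pool_mostly_group_a = ["acceptable"] * 80 + ["not acceptable"] * 20
pool_mostly_group_b = ["acceptable"] * 20 + ["not acceptable"] * 80

print(moral_average(pool_mostly_group_a))  # -> acceptable
print(moral_average(pool_mostly_group_b))  # -> not acceptable
```

The 'average' answer flips entirely depending on who was sampled, which is why the demographics of the annotator pool matter as much as the volume of data collected.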

3. Keeping up with change

Artificial intelligence is not only a mirror of the status quo, but of the status quo at a specific moment in time. A system is only as good as the data it is trained on – without being continuously fed new data, it quickly becomes outdated. Take ChatGPT – a system mostly trained on data from before 2022, drawn from sources such as literature, Wikipedia articles and other internet materials. For general knowledge and fun reading, this is fine. The challenge comes when we consider changes to the law, such as the overturning of Roe v. Wade by the US Supreme Court in 2022.

For such life-changing or even life-threatening decisions, it is difficult to see how system developers can ensure the data is up to date, relevant and fair for the user of such a globally popular tool. With good intentions, the developers could feed the system a large quantity of updated, unbiased and current data. However, there are fundamental problems: who decides which changes are worth incorporating, and who has the authority to sanction them? Further, it is unclear who defines and applies the ethical and technical criteria for deciding what information is relevant to each user. And is it up to the system developers to ensure the information is appropriate and correct? There are also questions of accountability when incorrect data is served to people who might suffer severe consequences as a result.

As of now, ChatGPT and other such systems leave several uncomfortable questions unanswered.

Oxford Executive Diploma in Artificial Intelligence for Business