AI trends: what are the ethical threats of using artificial intelligence?

The ethics of AI has been discussed for many years. In 2021, the EU and China released guidelines intended to ensure the ethical development and deployment of artificial intelligence systems. The rapid introduction of large language models has drawn further attention to AI ethics.

Where the use of AI is ethical and where it is not

It would be inaccurate to say that there are entire industries where using artificial intelligence is unethical. The ethical risks of artificial intelligence stem from threats to fundamental human rights rather than from any particular industry. There are, however, areas where, depending on the application, AI may pose greater risks and require limits on the depth of its application as well as additional controls.

There are several levels of depth of AI application. At the surface level are low-stakes decisions, such as scheduling a doctor's appointment. These are decisions we are already prepared to delegate to artificial intelligence.

More complex decisions can be made with prior human oversight. Take a citizenship application: artificial intelligence can aggregate and analyse the applicant's personal data and prepare a preliminary conclusion, but the final decision is made by a civil servant.

At the next level, artificial intelligence gains autonomy and requires only post-hoc human review. Here, the acceptable depth of AI application depends on the industry: migration control, justice (initially administrative matters, such as parking fines), hiring decisions, and so on. At this level, the risks of unethical use of artificial intelligence are high.

“As technology and society develop and controls become more reliable, we may revise the acceptable depth of AI application for different tasks and industries.”

Certain applications of AI, such as remote biometric identification – technologies that identify a person by face, gait, posture, etc. – will remain high-risk, as they may violate human rights or threaten human safety and health.

What are the risks of using large language models?

The main risks that arise with the emergence of tools such as ChatGPT can be divided into several groups.

The first group covers risks to personal data and privacy. Language models can draw generalisations, combine information, and reach conclusions that would not be available to a human analysing the same data. For example, from data about a person's behaviour on social media and the Internet in general, including their queries to LLM-based chatbots, one can draw deep conclusions about their mental state, habits, daily schedule, and so on. The combination of large data sets and their analysis by AI creates additional privacy threats that are not obvious to users, because the individual pieces of data pose no threat on their own.

“If this used to be a matter of the medium-term future, the dossiers that can already be compiled through publicly accessible chatbots on fairly non-public people are disturbingly detailed compared with the results of traditional searches.”

The next group is the risk of segregation. Depending on how they are trained and on what data, language models may favour certain social groups: they will work better for those groups while paying less attention to others, marginalising them, excluding certain minorities (ethnic or otherwise), and reproducing human prejudices against those minorities.

The third group of risks is inaccurate and false information. This includes, for example, incorrect medical or legal advice. Language models are not very good at distinguishing fact from fiction, yet they give the impression of being reliable interlocutors. This can create many threats.

“There are also security risks: language models are very convenient for mass disinformation, and for personalised disinformation with a human face, targeting specific groups of people, for example, through comments on social media and in the press.”

There is also personalised fraud: a person receives a message crafted with social engineering techniques aimed specifically at them, exploiting individual weaknesses and prejudices, which makes the attack more effective. There are, of course, also risks related to intellectual property infringement and plagiarism.

Language models can also be used to help develop computer viruses or weapons, with artificial intelligence substituting for the specialist expertise that people acting with malicious intent lack.

“Another group of risks is the threat to human rights posed by the use of artificial intelligence by government agencies. First, there is the ability to process huge amounts of data, drawing generalisations and conclusions about individuals. There is also the ability to recognise and therefore track people physically, and to de-anonymise their online behaviour.”

This gives government agencies powerful tools to interfere with privacy. And as I mentioned above, the indiscriminate use of AI can lead to unlawful or erroneous decisions regarding citizens – imposing sanctions, denying access to services, credit, employment, and so on. Worst of all, the reasoning behind such decisions may be impossible to explain and, therefore, impossible to challenge. The introduction of AI tools in high-risk applications and industries should therefore be cautious and gradual.

Finally, I would like to recall the well-known provocative view of Nick Bostrom, the Swedish philosopher, researcher, Oxford professor, and founder of the Future of Humanity Institute and co-founder of the Institute for Ethics and Emerging Technologies: AI need not destroy humanity with any weapon; it could instead seize control over it through manipulation, understanding perfectly how the human brain works and wielding the full arsenal of methods for shaping public opinion in the desired direction.

He was talking about artificial general intelligence capable of self-awareness, but we cannot discount the risk that future generations of large language models will be used for similar purposes, at least in some totalitarian societies.

Material from Speka.
