Artificial intelligence and ethics

By Dr Savvas Hadjichristofis*

Today, artificial intelligence (AI) quietly and discreetly assists us at every step of our daily lives. Smart devices guide us to our destination, while online services tailor their content to our interests.

TV platforms choose the next film we will watch, while intelligent “assistants” improve the way we interact with our own homes.

Soon, AI applications will allow us to improve our lives, our health and our future.

It is important to emphasize, however, that AI is entirely shaped by the data on which it was trained. If the training data run counter to the concept of “morality”, then the system’s behavior will be “immoral” as well.

A typical example is Microsoft’s chatbot “Tay” in 2016.

“Tay” was an intelligent chatbot on Twitter that could interact with users, converse with them, and learn from that communication. Its knowledge came solely from its interlocutors. As a result, in just 17 hours, some users “taught” it to behave in a racist way. “Tay” soon turned into an admirer of Hitler and a conspiracy theorist.

The morality of AI must be compatible with the morality and laws of the society in which it is applied. In the case of autonomous driving, for example, alongside the “morality” governing the vehicle’s behavior in a possible accident, the question of legal responsibility also arises. Does the transfer of driving duties from human to machine also transfer responsibility in the event of an accident? Who is held liable: the machine, the owner, the manufacturer or the programmer of the vehicle?

Fears about the morality of AI naturally stem from the way in which it is used. It is a super-tool, but one that can be used to change beliefs and decisions.

The Cambridge Analytica scandal is a well-known case, in which AI was accused of being used to sway the judgment of voters in the US and thereby alter the outcome of the election.

The European Union also appears to recognize the importance of ethics in AI systems: for high-risk applications, such as health and policing, it mandates transparency about training data. Authorities should be able to test and certify algorithms, while objective data are required to train the systems, ensuring respect for fundamental rights and non-discrimination.

* Dr Savvas Hadjichristofis is Vice Rector for Research and Innovation and Professor of Artificial Intelligence at Neapolis University in Pafos, Cyprus.
