Prepared by Dilara Şahin

Ed. by Eren Sözüer


New technologies tend to raise ethical and political questions. These questions are society's reaction to the technology and to what it may bring. Technological advances challenge existing beliefs and norms, and states are often obliged to act and to regulate where those norms are contested. For this reason, the nature of AI technologies needs to be understood correctly. To ground a discussion of responsibility for possible harms, this post will first focus on the ethical problems of AI technologies.

Problems Around Ethics of Artificial Intelligence

  1. The Problem of Transparency and Predictability: When artificial intelligence replaces human beings in areas that require human judgment, new ethical questions arise. AI consists of algorithms, and principles must be put in place so that the system works transparently. Consider, for example, a building security system that determines which visitors constitute a danger. When AI is used for such screening, the algorithm labels certain people as “dangerous,” and individuals who receive that label should have a right to know why. Clear principles must therefore be laid out to make the decision explainable and to avoid bias.
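One way to picture the transparency requirement is a screening routine that never separates a decision from its justification. The sketch below is purely illustrative (the rules, field names, and function are hypothetical, not any real security system): every “dangerous” flag carries the human-readable reasons that produced it, so the labeled individual can be told why.

```python
# Illustrative sketch only: a rule-based screening check that records
# the reason behind every "dangerous" label, so the decision can be
# explained to the person it affects. All rules and field names here
# are hypothetical.

def screen_visitor(visitor):
    """Return (flagged, reasons) for a visitor record (a dict)."""
    reasons = []
    if visitor.get("on_watchlist"):
        reasons.append("matched an entry on the watchlist")
    if visitor.get("badge_expired"):
        reasons.append("access badge has expired")
    # The decision and its justification always travel together.
    return (len(reasons) > 0, reasons)

flagged, reasons = screen_visitor({"on_watchlist": False, "badge_expired": True})
# flagged is True, and reasons explains exactly why.
```

The design point is that explainability is built into the output type itself: a bare boolean would satisfy the security goal but not the affected person's right to know.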

When AI takes over cognitive tasks, human supervision will still take place, so it is important that the technology be predictable enough to supervise. This may mean that, in order to achieve stability, further technological advancement sometimes takes second place: people will be relying on AI, and an unpredictable system undermines that reliance.

  2. Algorithms and the Problem of Manipulation:

What is an Algorithm?

  • An algorithm is a sequence of instructions that tells a computer how to transform data into usable information. The algorithm takes “data” as its input and turns it into knowledge that human beings can understand, somewhat like a translation. The output can then serve as input for another algorithm, or as instructions for a machine or some other system, depending on the algorithm's intended use.
  • Algorithms are not general-purpose; each is aimed at achieving a certain task in a certain way. An algorithm typically operates on data by searching, sorting, inserting, updating, or deleting items. To be sure it is doing its job correctly, the input and the output must be clearly defined, and the steps leading from point A to point B must be well established. That rigidity can be a danger in social situations, where flexibility and a sense of occasion are essential. Everyday interactions involve many elements, and algorithms may not anticipate all of them. Yet trying to make an algorithm account for every element leaves us in an even more vulnerable position.
  • It can also leave the algorithm open to manipulation. For example, suppose an airport security X-ray is meant to detect dangerous objects, but its programmer designed a very “predictable” algorithm that looks only for bombs and not for guns: anyone who learns this can carry a gun through unchallenged. A balance is therefore needed: the algorithm must be specific enough to catch the dangers it was built for, yet broad enough not to leave us exposed to the ones it was not.
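The specificity trade-off in the X-ray example above can be sketched in a few lines. This is an illustrative toy, not a real detection system (the item names, categories, and functions are all hypothetical): a detector written against one narrow signature is easy to predict and to game, while a detector keyed to a broader category of threats is harder to sidestep.

```python
# Illustrative sketch of the specificity trade-off: a narrow rule is
# predictable and easy to manipulate; a category-based rule is broader.
# All item names and categories below are hypothetical.

THREAT_CATEGORIES = {
    "bomb": "explosive",
    "gun": "weapon",
    "knife": "weapon",
}

def narrow_detector(item):
    # Too predictable: only ever looks for one specific signature,
    # so anyone who learns the rule can carry anything else through.
    return item == "bomb"

def broad_detector(item):
    # Broader: flags anything mapped to a known threat category.
    return item in THREAT_CATEGORIES

narrow_detector("gun")   # not flagged: the narrow rule misses it
broad_detector("gun")    # flagged by the category-based rule
```

Even the broad detector only knows the categories it was given, which echoes the text's point: no amount of widening removes the need to decide, in advance, what counts as a danger.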


  • Jory Denny, ‘What Is an Algorithm? How Computers Know What to Do with Data’ (The Conversation, 16 October 2020), accessed 27 June 2021
  • Nick Bostrom and Eliezer Yudkowsky, ‘The Ethics of Artificial Intelligence’ (2011), Cambridge Handbook of Artificial Intelligence