Prepared by Elif Ünsal

Ed. by Eren Sözüer

Artificial intelligence (AI) systems, and in particular machine learning (ML), are progressing day by day. To understand a problem in AI methods, its causes should first be explained and clearly demonstrated with concrete examples; only then can improvements and corrective solutions be devised. This post explores the causes of bias in ML and explains them.

Artificial intelligence is fundamentally based on mimicking the human brain. It is a system that is purported to think and act rationally on its own. What it essentially does is combine computation with datasets for problem solving. AI technologies can acquire skills such as learning, understanding conversations and producing content, depending on the information fed to them. However, there are still certain skills that AI cannot achieve, because they are specific to human intelligence and originate from human imagination. For example, AI cannot justify its decisions, and it cannot evaluate legal disputes the way jurists do.

Deep learning and machine learning are common artificial intelligence methods. Both are subfields of artificial intelligence, and deep learning is itself a subfield of machine learning.[i]

Deep learning uses larger datasets and needs less human intervention than classical machine learning. Developers can impose a hierarchy on the data, but this is not obligatory, because deep learning algorithms can “understand” which information is most prominent and matters more by evaluating all of the data. This is where the difference lies: non-deep (classical) machine learning methods depend more heavily on the developer. They need labelled data, whereas deep learning can label the data itself by recognising patterns.

Machine learning makes it possible to analyse and classify vast amounts of data in seconds. It saves time and allows work with varied data. However, ML also has disadvantages, such as maintaining existing bias or reinforcing norms that should be questioned. Because of its need for data, it can violate the rights to privacy and data protection, which are protected under Article 8 of the European Convention on Human Rights (ECHR).

ML may also raise transparency issues, depending on how it is used, because the choice of data is up to the developers. Users (judges, lawyers, police), the public and especially the individuals about whom ML makes predictions need to “understand how the tools’ predictions are reached and make reasonable decisions based on these predictions.”[ii] Knowledge of the decision-making process of ML enables courts to audit how risk assessment instruments are implemented in practice. Individuals can likewise learn what kind of tools make predictions about them and decide whether to trust such tools. If stakeholders and developers transparently explain how they developed an algorithm, jurists or defendants can understand its results by knowing its methods. However, developers may object, relying on their intellectual property rights, and may choose not to share how they developed their algorithms.

How is ML used in law?

ML is mostly used for researching or analysing publicly available literature, like a search engine. ML helps lawyers see varied data (different types of cases) at the same time by analysing existing data. Lawyers can select or classify these cases according to their needs, so those who use such software operate it themselves.

General categories of use can be classified as:

● “Advanced case-law search engines 

● Online dispute resolution  

● Assistance in drafting deeds  

● Analysis (predictive, scales) 

● Categorisation of contracts according to different criteria and detection of divergent or incompatible contractual clauses  

● ‘Chatbots’ to inform litigants or support them in their legal proceedings”[iii]

Where do legal systems face bias in ML?

Predictive mechanisms should be mentioned first, because ML is used in multiple areas of law: estimating where crime is most likely to occur, facial recognition software to identify possible suspects and, in some systems, even directly affecting court verdicts or probation decisions. Importantly, ML is also used to predict a defendant’s future risk of misconduct. These tools are called risk assessment instruments (RAIs). Their predictions inform high-stakes judicial decisions, such as whether to incarcerate an individual before their trial. One example is an RAI used in the USA, the Public Safety Assessment (PSA),[iv] whose criteria contain only the individual’s age and records of arrests, convictions and sentences, and which scores the individual’s risk of not completing a successful pretrial period. The PSA produces three assessments: the risk of a new criminal arrest during the pretrial period, the risk of an arrest for a violent criminal offence or attempt, and the risk of failing to appear in court. On this basis it rates the individual’s risk. The PSA’s results are for courts to take into consideration when deciding on release, but they do not bind the court and have no direct, automatic effect on the verdict.[v]
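A minimal Python sketch of how a point-based risk assessment instrument might combine such criteria into a score. The factors, weights and cut-offs below are invented for illustration; they are not the actual PSA scoring rules, which only loosely inspire the kinds of inputs used here (age and criminal history).

```python
# Hypothetical point-based risk scorer, loosely inspired by the kinds of
# inputs the PSA is described as using (age and criminal history).
# The factors, weights and cut-offs are invented for illustration only.

def failure_to_appear_score(age: int, prior_fta: int, prior_convictions: int) -> int:
    """Return an illustrative 1-6 risk score for failure to appear."""
    points = 0
    if age < 23:                         # younger defendants scored higher (assumed factor)
        points += 1
    points += min(prior_fta, 2)          # prior failures to appear, capped
    points += min(prior_convictions, 3)  # prior convictions, capped
    # Map the raw points (0-6) onto a 1-6 scale for the court report.
    return points + 1 if points < 6 else 6

if __name__ == "__main__":
    # Two hypothetical defendants facing the same charge but with different histories.
    print(failure_to_appear_score(age=21, prior_fta=1, prior_convictions=2))  # 5
    print(failure_to_appear_score(age=40, prior_fta=0, prior_convictions=0))  # 1
```

As in the paragraph above, such a score would only inform the court’s release decision, not determine it.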

When these tools are used, bias can arise from different sources. One is the training data: the input fed to an algorithm for learning, on the basis of which the ML algorithm makes its predictions. If wrong or inadequate data are provided as input, biased results can follow. Bias might also result directly from the designer’s own biases, because the designer can choose to supply wrong, inadequate or biased data. Another source of bias arises when ML grounds its predictions in historical outcomes.[vi] Convictions can form about a group of people based on their past crime records, as if “they are inclined to commit a type of crime.” For example, Black Americans are arrested for marijuana offences at higher rates than white Americans even though marijuana use does not actually depend on race; an algorithm trained on such records absorbs this pattern, because ML cannot distinguish the risk that comes from the historical record from the actual risk posed by the individual.[vii]
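This mechanism can be shown in a small, self-contained Python sketch using invented numbers: the underlying behaviour rate is identical for both groups, but the historical records used as training data over-represent one group, so a model that simply learns the recorded frequency reproduces the historical bias.

```python
# Invented illustration: the underlying behaviour rate is the same for both
# groups, but historical enforcement recorded group A twice as often.
true_rate = {"group_A": 0.10, "group_B": 0.10}             # actual behaviour (identical)
recorded_arrest_rate = {"group_A": 0.20, "group_B": 0.10}  # biased training data

def learned_risk(group: str) -> float:
    """A naive model that learns risk directly from historical arrest records."""
    return recorded_arrest_rate[group]

for group in ("group_A", "group_B"):
    print(f"{group}: true rate {true_rate[group]:.0%}, "
          f"predicted risk {learned_risk(group):.0%}")
# The model rates group A as twice as risky, purely because of how the
# historical data were generated, not because of any difference in behaviour.
```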

Accuracy is also related to the size of the training data: a larger training dataset generally leads to fewer errors, while less data leads to worse predictions. With larger datasets, it is possible to determine which information is correct or applicable to a given situation by partitioning the data.

This is considered unfair, since different groups of the population get different prediction error rates, which has a negative effect on minorities.[viii] Minority groups are, by definition, small groups, so their effect on the algorithm is negligible, which leads to misclassifications and bias. For example, when developers feed photos of people to an ML system and ask it to identify their gender, a problem emerges: the system’s accuracy is higher for men with light skin than for women with dark skin. The main reason is representation: ML does not have the same amount of data for different groups of people, and the quantity of data for small groups is lower, so they are not represented equally. Detecting this bias is a further problem. Even while the system harms minority groups, its overall accuracy remains high, because it predicts and classifies well for the majority; the inaccurate results for minority groups can therefore be hidden in the aggregate performance figures.
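The point about overall accuracy masking poor performance on a small group can be made concrete with a short Python sketch using invented labels and predictions:

```python
# Invented evaluation data: a classifier that is 95% accurate on the
# majority group but only 60% accurate on a small minority group.
majority = [("majority", True)] * 95 + [("majority", False)] * 5  # 100 examples
minority = [("minority", True)] * 6 + [("minority", False)] * 4   # 10 examples
results = majority + minority                                     # (group, correct?)

def accuracy(rows):
    return sum(correct for _, correct in rows) / len(rows)

print(f"overall accuracy:  {accuracy(results):.1%}")                                     # 91.8%
print(f"majority accuracy: {accuracy([r for r in results if r[0] == 'majority']):.1%}")  # 95.0%
print(f"minority accuracy: {accuracy([r for r in results if r[0] == 'minority']):.1%}")  # 60.0%
# The headline number looks good, so the disparity affecting the minority
# group can go unnoticed unless accuracy is reported per group.
```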

In risk assessment instruments, the criteria for classifying individuals are the data that have been fed to them: age, gender, ethnic origin, race, colour, socioeconomic status, the places where people live or work, sexual orientation and so on. These are the characteristics that make up a person’s identity, and discrimination based on many of them (gender, race etc.) is prohibited in most legal systems.

As mentioned before, bias that stems from developer bias in the training data, from algorithms being fed wrong or inadequate data, or from historical bias might be classified as explicit bias. But ML can also exhibit implicit bias when it uses these characteristics as criteria for analysing the data, because bias can persist even when developers try to avoid it by removing the information that entails it. ML can find patterns and relations between different pieces of information about an individual. The reason is that “many characteristics of an individual provide weak indication of their age, called proxy variables.”[ix] Owing to the correlation between the erased information and these proxy variables, ML can reconstruct data it was never given. In conclusion, biased results can emerge even when the sensitive information has not been provided to the ML system.

For example, when developers want ML to analyse candidates for a job, it has to comply with the prohibition of discrimination (Article 14 of the ECHR) based on gender, age, race, religion and so on. To prevent biased results, developers can remove such characteristics (information about one’s gender, race or age) and use the remaining data, such as postal code, former jobs, education, physical appearance and interests. However, the same outcomes occur, because the remaining data are linked to the removed data.[x]
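A minimal Python sketch of this proxy-variable problem, with invented applicants and postal codes: the sensitive attribute is dropped before the data reach the model, but postal code correlates with it strongly enough that a model keyed on postal code still produces group-dependent outcomes.

```python
# Invented applicant records: the sensitive attribute ("group") is dropped
# before the data reach the model, but postal code acts as a proxy for it.
applicants = [
    {"group": "A", "postal_code": "34010", "hired_historically": 0.2},
    {"group": "A", "postal_code": "34010", "hired_historically": 0.2},
    {"group": "B", "postal_code": "34400", "hired_historically": 0.7},
    {"group": "B", "postal_code": "34400", "hired_historically": 0.7},
]

# "Debiased" training data: the group label is removed, the postal code stays.
training_data = [{k: v for k, v in a.items() if k != "group"} for a in applicants]

# A naive model that scores candidates by the historical hiring rate
# observed for their postal code.
def score(postal_code: str) -> float:
    rates = [row["hired_historically"] for row in training_data
             if row["postal_code"] == postal_code]
    return sum(rates) / len(rates)

print(score("34010"))  # 0.2 -- applicants from group A's area still score low
print(score("34400"))  # 0.7 -- applicants from group B's area still score high
# Removing the sensitive attribute changed nothing, because postal code
# encodes the same information.
```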

Using biased algorithms violates fundamental rights in many ways: 

  • The effects on minority groups are disproportionate.
  • By reinforcing existing bias in society, it threatens many rights, such as the right to a fair trial and the prohibition of discrimination.
  • It may impede efforts to remedy social injustices by reinforcing society’s convictions and prejudices.
  • The individualisation of punishment may be undermined by discrimination based on group classifications.
  • Trust in the law, and the liberty and security of individuals, may be adversely affected.

[i]IBM Cloud Education, ‘Artificial Intelligence’ (IBM Cloud Learn Hub, 3 June 2020) Date of Access 29 June 2021

[ii]PAI Staff, ‘Report on Algorithmic Risk Assessment Tools in the U.S. Criminal Justice System’ (2019) The Partnership on AI <https://partnershiponai.org/wp-content/uploads/2021/08/Report-on-Algorithmic-Risk-Assessment-Tools.pdf> Date of Access 4 August 2021

[iii]CEPEJ, ‘European ethical Charter on the use of Artificial Intelligence in judicial systems and their environment’ (2018) Council of Europe <https://rm.coe.int/ethical-charter-en-for-publication-4-december-2018/16808f699c> Date of Access 29 June 2021

[iv] Alex Chohlas-Wood, ‘Understanding risk assessment instruments in criminal justice’ (The Brookings Institution’s AIET Initiative, 19 June 2020) Date of Access 29 June 2021

[v] Advancing Pretrial Policy & Research, ‘About the Public Safety Assessment’ (APPR) Date of Access 30 August 2021

[vi] Chohlas-Wood (n 4)

[vii] ibid

[viii] Panel for the Future of Science and Technology, ‘Understanding algorithmic decision-making: Opportunities and challenges’ (2019) EPRS <https://www.europarl.europa.eu/RegData/etudes/STUD/2019/624261/EPRS_STU(2019)624261_EN.pdf> Date of Access 30 June 2021

[ix] George Čevora, ‘How Discrimination occurs in Data Analytics and Machine Learning: Proxy Variables’ (Towards Data Science, 5 February 2020) Date of Access 4 August 2021

[x] Čevora (n 9)