14.1.2021

Artificial Intelligence: Don’t Let a Biased Algorithm Ruin Your Business

The COVID-19 crisis and recent data breaches have highlighted the importance of trustworthy data processing. In its white paper on artificial intelligence, the European Commission also emphasised that trustworthiness is a prerequisite for the adoption of digital technology. Ethical AI is one of the EU’s key goals. Algorithmic biases are a particular risk that can undermine the trustworthiness of AI, and everyone developing or adopting AI applications should be prepared to deal with such biases.

AI Is Biased and Can Lead to Discrimination

AI is biased by default, as it operates on data and rules that are input and defined by people. This bias can arise for a number of reasons. The data used to train an AI may be factually incorrect, incomplete or otherwise prone to cause algorithmic bias. The algorithm itself may also have been developed to give weight to certain grounds for discrimination, such as gender or language. Biases are particularly significant when people are subject to automated decisions made by an AI application: if a bias is not corrected in time, such decisions can lead to discrimination.

A discriminatory AI could arise, for example, because the application has been trained primarily on data from men and therefore does not produce optimal results for women. Specific features of many AI applications, such as opacity, complexity, unpredictability and partially autonomous behaviour, can also be problematic from the perspective of protecting fundamental rights.
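To make that mechanism concrete, the following minimal sketch (in Python, using entirely synthetic data and hypothetical group labels, not any real application) trains a model on data dominated by one group and then measures its accuracy for each group separately. The underrepresented group, whose data follows a different pattern, receives markedly worse results:

```python
# Minimal sketch: a model trained mostly on one group's data
# underperforms for an underrepresented group. Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, flipped):
    # One feature; the label depends on its sign. For the underrepresented
    # group the relationship is reversed, standing in for any real-world
    # difference between groups.
    x = rng.normal(size=(n, 1))
    y = (x[:, 0] > 0).astype(int)
    return x, (1 - y if flipped else y)

# Training data: 950 samples from group A, only 50 from group B.
xa, ya = make_group(950, flipped=False)
xb, yb = make_group(50, flipped=True)
model = LogisticRegression().fit(np.vstack([xa, xb]), np.concatenate([ya, yb]))

# Evaluate on fresh data from each group separately: the model fits the
# majority group well and performs far worse for the minority group.
for name, flipped in [("group A", False), ("group B", True)]:
    x, y = make_group(500, flipped)
    print(name, "accuracy:", accuracy_score(y, model.predict(x)))
```

The point of the sketch is simply that aggregate accuracy can look acceptable while results for a minority group are poor, which is why performance should be evaluated per group rather than only overall.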

Automated decision-making and the associated risk of discrimination have given rise to a great deal of debate. Decisions made by AI have even been appealed to the Non-Discrimination Ombudsman. One case was brought before the National Non-Discrimination and Equality Tribunal, which in 2018 issued a decision prohibiting the use of the AI in question and imposed a conditional fine.

The case concerned a credit institution that had used a fully automated decision-making system. The system scored consumer credit applicants by comparing each applicant's data to corresponding statistics in order to determine creditworthiness. The system was found to discriminate against applicants based on, among other things, gender and native language. It is worth noting that even small choices made by an AI, such as directing a certain type of applicant to customer service, can be deemed decisions.

AI Needs Supervision and Good Agreements Are Key

To prevent biases, the functioning of AI applications must be regularly monitored and evaluated, whether the application has been developed in-house or acquired from an outside provider. Because AI reflects the views and values of the people who developed it, acquiring AI applications from abroad involves risks of its own: the AI may display biases rooted in the cultural norms of its developers.
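As one illustration of what such monitoring could look like in practice, the sketch below compares approval rates across groups in a decision log and flags large gaps for human review. The log format is hypothetical, and the 0.8 threshold is an illustrative heuristic rather than a legal standard:

```python
# Minimal monitoring sketch (hypothetical log format): compare approval
# rates across groups and flag large gaps for human review.
from collections import defaultdict

def approval_rates(decisions):
    # decisions: iterable of (group, approved) pairs from a decision log.
    approved, total = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / total[g] for g in total}

def flag_disparities(rates, threshold=0.8):
    # Flag any group whose approval rate falls below threshold x the
    # highest group's rate. The threshold is illustrative only.
    best = max(rates.values())
    return [g for g, r in rates.items() if r < threshold * best]

log = [("A", True), ("A", True), ("A", False),
       ("B", False), ("B", False), ("B", True)]
rates = approval_rates(log)
print(rates)                    # -> {'A': 0.67, 'B': 0.33} (approx.)
print(flag_disparities(rates))  # -> ['B']
```

A routine check of this kind does not prove or disprove discrimination on its own, but it gives the organisation a trigger for closer investigation before a biased decision process causes harm.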

As there is still little to no legislation specifically concerning AI, the rights and obligations of those agreeing on AI applications are based on the contracts between the parties. Among other things, the contractual terms should provide for liability for defects and for audits. Agreeing on liability for defects is important, for example, in case the data used to train the AI is deficient or the algorithm functions incorrectly. An auditing clause should set out how the correct functioning of the AI is ensured when its decision logic is not sufficiently transparent. Though the field lacks actual legislation, the EU's ethical guidelines for AI provide a framework for the responsible use of AI.