14.1.2021

Artificial Intelligence: Don’t Let a Biased Algorithm Ruin Your Business

The COVID-19 crisis and recent data breaches have highlighted the importance of trustworthy data processing. In its white paper on artificial intelligence, the European Commission also emphasised that trustworthiness is a prerequisite for the adoption of digital technology. Ethical AI is one of the EU’s key goals. Algorithmic biases are a particular risk that can undermine the trustworthiness of AI, and everyone developing or adopting AI applications should be prepared to deal with such biases.

AI is Biased and Can Lead to Discrimination

AI is biased by default, as its functioning is based on data and rules that are input and defined by people. This bias can arise for a number of reasons. The data used to train AI could be factually incorrect, incomplete or otherwise liable to cause algorithmic bias. The algorithm used by the AI could also have been developed to give weight to certain grounds for discrimination, such as gender or language. Biases are particularly significant when people are subject to an automated decision made by an AI application. If the bias is not corrected in time, such decisions can lead to discrimination.

A discriminatory AI could be created, for example, because the application has been trained primarily using data from men and therefore does not produce optimal results for women. Specific features of many AI applications, such as opacity, complexity, unpredictability and partially autonomous behaviour, can also be problematic from the perspective of the protection of fundamental rights.
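As an illustration of the skewed-training-data scenario described above, the following is a minimal Python sketch using synthetic data and scikit-learn. The groups, features and figures are purely hypothetical assumptions for the example, not taken from any real case; the point is simply that a model trained almost entirely on one group can score noticeably worse when evaluated on another.

# Minimal, hypothetical sketch: a skewed training set can produce worse
# results for an under-represented group. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Generate synthetic applicants; `shift` moves the true decision boundary per group.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 3))
    y = (X.sum(axis=1) + rng.normal(scale=0.5, size=n) > shift * 3).astype(int)
    return X, y

# Training data is dominated by group A (95% of the examples).
X_a, y_a = make_group(950, shift=0.0)
X_b, y_b = make_group(50, shift=1.0)
model = LogisticRegression().fit(np.vstack([X_a, X_b]), np.concatenate([y_a, y_b]))

# Evaluate separately per group: the under-represented group typically scores worse.
X_a_test, y_a_test = make_group(1000, shift=0.0)
X_b_test, y_b_test = make_group(1000, shift=1.0)
print("accuracy, group A:", accuracy_score(y_a_test, model.predict(X_a_test)))
print("accuracy, group B:", accuracy_score(y_b_test, model.predict(X_b_test)))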

Automated decision making and the risk of discrimination related to it have given rise to a great deal of debate. Decisions made by AI have even been brought before the Non-Discrimination Ombudsman. One case proceeded to the National Non-Discrimination and Equality Tribunal, which in 2018 prohibited the use of the AI in question and imposed a conditional fine.

The case concerned a credit institution that had used a fully automated decision-making system. The system scored consumer credit applicants by comparing the applicant’s data to corresponding statistics in order to determine creditworthiness. The system was found to discriminate against applicants on the basis of, among other things, gender and native language. It is worth noting that even small choices made by AI can be deemed decisions, such as directing a certain type of applicant to customer service.

AI Needs Supervision and Good Agreements Are Key

In order to prevent biases, the functioning of AI applications must be regularly monitored and evaluated, whether the application has been developed in-house or acquired from an outside provider. Because AI reflects the views and values of the people who developed it, acquiring AI applications from abroad involves risks of its own and calls for caution: the AI may display a bias rooted in the cultural norms of its developers.
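By way of illustration only, the following Python sketch shows one hypothetical way such regular monitoring could be implemented in practice: comparing an automated system’s approval rates across groups and flagging the model for human review when the gap exceeds a chosen threshold. The groups, the figures and the 0.8 threshold are assumptions for the example, not requirements drawn from the article or from any legislation.

# Hypothetical monitoring check: compare approval rates across groups and
# flag the model for human review if the ratio falls below a chosen threshold.
from collections import defaultdict

def approval_rates(decisions):
    # decisions: iterable of (group, approved) pairs -> approval rate per group.
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def flag_disparity(decisions, threshold=0.8):
    # True if any group's approval rate is below `threshold` times the highest rate.
    rates = approval_rates(decisions)
    best = max(rates.values())
    return any(rate < threshold * best for rate in rates.values())

# Example log of automated decisions (synthetic):
log = [("A", True)] * 80 + [("A", False)] * 20 + [("B", True)] * 50 + [("B", False)] * 50
if flag_disparity(log):
    print("Disparity detected - route the model for human review and audit.")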

As there is still little to no legislation specifically concerning AI, the rights and obligations of parties agreeing on AI applications are based on the agreements between them. Among other things, the contractual terms should provide for liability for defects and for audits. Agreeing on liability for defects is important, for example, in case the data used to train the AI is deficient or the algorithm functions incorrectly. An auditing clause should set out how the correct functioning of the AI is ensured if the decision logic of the AI is not sufficiently transparent. Though the field lacks specific legislation, the EU’s ethical guidelines for AI provide a framework for the responsible use of AI.
