12.2.2019

#10yearchallenge – The Many Faces of Facial Recognition

You probably noticed the #10yearchallenge meme, which encouraged people to post pictures of themselves today and ten years ago onto their social media accounts. I certainly enjoyed looking through my friends’ pictures, and was also inspired to compare my latest and first profile pictures to see how much fashion—and, yes, my own face, too—has changed over the past decade.

Soon after the start of the craze, people started speculating about whether the meme was perhaps a data collection ploy started by Facebook in an effort to train its facial recognition algorithms to recognise the signs of aging. What easier way to gather the necessary data than to start an innocent meme that gets users themselves to upload their pictures into a large, organised database while entertaining themselves and their networks? Facebook has denied these allegations, claiming that the phenomenon originated with its users, and has reminded them that they can switch off facial recognition at any time.

How is Facial Recognition Used?

There isn’t necessarily anything wrong with developing facial recognition technology; quite the opposite. If used correctly, it could change the world for the better, for example, by helping to diagnose illnesses early. Wired, an American magazine, recently published an article by Kate O’Neill in which she, also inspired by the #10yearchallenge, weighs the benefits and risks of facial recognition technology. According to her, facial recognition algorithms that recognise the signs of aging could be used, for example, to find missing children years later.

We encounter facial recognition technology every day, for example, when using our smartphones: a glance is enough to unlock your phone, many apps use facial recognition, and even your photo gallery often uses facial recognition to automatically group your pictures by person. Facial recognition can also make daily life easier: in Finland, we have piloted facial recognition to approve payments, and the technology is used in border checks to speed up the movement of people.
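For readers curious about how the photo-grouping feature mentioned above might work under the hood, here is a minimal sketch. It assumes the open-source face_recognition library, illustrative file names and a simple distance threshold, none of which reflect how any particular gallery app actually works: each photo is turned into a numerical face embedding, and photos whose embeddings lie close together are greedily grouped as belonging to the same person.

    # A minimal sketch, assuming the open-source face_recognition library and
    # placeholder file names -- real gallery apps use their own proprietary
    # models and far more sophisticated pipelines.
    import face_recognition
    import numpy as np

    def group_photos_by_person(image_paths, tolerance=0.6):
        """Greedily group photos: each detected face joins the first existing
        group whose representative embedding lies within `tolerance`."""
        groups = []  # list of (representative_encoding, [photo paths])
        for path in image_paths:
            image = face_recognition.load_image_file(path)
            # One 128-dimensional embedding per face detected in the photo.
            for encoding in face_recognition.face_encodings(image):
                for representative, paths in groups:
                    if np.linalg.norm(representative - encoding) <= tolerance:
                        paths.append(path)
                        break
                else:
                    groups.append((encoding, [path]))
        return [paths for _, paths in groups]

    if __name__ == "__main__":
        # Placeholder file names for illustration only.
        for i, paths in enumerate(group_photos_by_person(["a.jpg", "b.jpg", "c.jpg"]), 1):
            print(f"Person {i}: {paths}")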

Technology is also being developed for cars to monitor the driver’s condition and improve traffic safety. In the US, facial recognition has been used to help law enforcement identify and arrest criminals. However, this kind of mass surveillance has been criticised because, if abused, it could lead to discrimination and threaten democracy.

Data collected by facial recognition technology that identifies the signs of aging could, combined with other data such as location data, online behaviour or health data, open the door to more extensive uses in the future. For example, there are already insurance policies in which the policyholder agrees to hand over exercise and sleep data as well as health and lifestyle data to the insurer in exchange for discounted premiums.

A Threat to Privacy?

It is clear that facial recognition technology has many good applications. However, if abused, it could become a threat to privacy. Facial recognition uses images of people’s faces, which in the data protection world are considered personal data. More specifically, such images are classified as biometric data if they allow or confirm the unique identification of an individual. Other biometric data includes fingerprint data used, for example, in access management.

Because facial recognition involves risks, there is an ongoing global debate about the need for regulation. In the EU, the use of facial recognition technology is governed by the General Data Protection Regulation, which sets a somewhat stricter framework for the processing of biometric data than for ordinary personal data. As with all personal data, it is important to consider the purpose for which facial recognition data is processed. If biometric data is used to uniquely identify a person, it can only be processed with that person’s explicit consent, in cases provided by law or, for example, if the person has made the data public themselves, perhaps by publishing a profile picture on a public social media account.

As users of social media, we should think about what our data is being used for and whether it is being processed lawfully, but we should also remember to respect our own data and privacy, as O’Neill suggests in her article. Social media platforms have long been filled with apps and games mainly intended to collect data for later use, for example, in advertising. On the other hand, as long as we keep in mind what data we are sharing, who we are sharing it with and what purposes it is being used for, we probably don’t need to start making tinfoil hats. Understanding the risks related to data processing puts us on a safer footing.
