
This post assumes that readers are aware of the good things that Artificial Intelligence (AI) can bring to our businesses, societies and lives, as well as of the most evident challenges that the massive uptake of this technology implies, such as bias and unwanted discrimination, lack of algorithmic explainability, automation and the future of work, privacy, and the liability of self-learning autonomous systems, to mention just a few. In this post, I will focus on bias and unwanted discrimination, and in particular on supervised machine learning algorithms.
The intrinsic objective of machine learning.
Before entering into the matter, we should not forget that the intrinsic objective of machine learning is to discriminate: it is all about finding those customers that have an intention to leave, finding those X-rays that manifest cancer, finding those photos that contain faces, and so on. What is not allowed in this process, however, is to base those patterns (a collection of certain attributes) on attributes forbidden by law. In Europe, those attributes are defined in the General Data Protection Regulation (GDPR) and include racial or ethnic origin, political opinions, religious beliefs, trade union membership, physical or mental health, sexual life and criminal offenses. In the US, the following characteristics are protected under federal anti-discrimination law: race, religion, national origin, age, sex, pregnancy, familial status, disability status, veteran status and genetic information.
Different sources of unwanted discrimination by algorithms.
As much research has already pointed out, there are different sources of unwanted discrimination by algorithms in Machine Learning, each of which may lead to discriminatory decision making:
- Discrimination due to bias in the data set because of an unbalanced distribution of so-called protected groups (represented by sensitive variables such as race, ethnic origin, religion, etc., as mentioned above).
- Discrimination due to the availability of sensitive variables in the data set, or their proxies: apparently harmless variables that exhibit a high correlation with sensitive variables.
- Discrimination due to the algorithm itself, manifested by the fact that the proportion of false positives and/or false negatives in the outcome is not equal across protected groups (a sketch of such a check follows this list).
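To make this last source concrete, here is a minimal sketch, using pandas only, of how one might compare false positive and false negative rates across groups; the column names and toy numbers are purely illustrative and not taken from any particular data set.

```python
# Minimal sketch: compare false positive and false negative rates across groups.
import pandas as pd

def error_rates_by_group(df: pd.DataFrame, group_col: str,
                         label_col: str, pred_col: str) -> pd.DataFrame:
    """Return false positive and false negative rates per value of group_col."""
    rows = []
    for group, sub in df.groupby(group_col):
        negatives = sub[sub[label_col] == 0]  # truly negative cases
        positives = sub[sub[label_col] == 1]  # truly positive cases
        fpr = (negatives[pred_col] == 1).mean() if len(negatives) else float("nan")
        fnr = (positives[pred_col] == 0).mean() if len(positives) else float("nan")
        rows.append({group_col: group, "FPR": fpr, "FNR": fnr, "n": len(sub)})
    return pd.DataFrame(rows)

# Hypothetical toy example: ground-truth labels and model predictions per group.
toy = pd.DataFrame({
    "group":     ["a", "a", "a", "a", "b", "b", "b", "b"],
    "label":     [1, 0, 0, 1, 1, 0, 0, 1],
    "predicted": [1, 0, 1, 1, 0, 1, 0, 0],
})
print(error_rates_by_group(toy, "group", "label", "predicted"))
```

Large differences in these rates between groups are a warning sign that the algorithm's errors are not evenly distributed.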
High-profile cases of unwanted discrimination reported in the media.
But let’s start by briefly mentioning some of the high-profile cases of unwanted discrimination that have been amply reported in the media:
- COMPAS. The US criminal justice system uses an AI system called COMPAS to assess the likelihood of defendants committing future crimes. It turned out that the algorithm used in COMPAS systematically discriminated against black people.
- Amazon had to withdraw an AI system that automatically reviewed job applicants’ resumes because it discriminated against women.
- Google had to change its Google Photos AI algorithm after it labelled black people as gorillas.
Sparked by those high-profile cases, several approaches for identifying and mitigating unwanted discrimination have seen the light. IBM has developed an open-source toolkit, called AI Fairness 360, that provides tools to detect bias in data sets and to mitigate it. Pymetrics, a data science company focused on recruiting, developed open-source software to help measure and mitigate bias. Aequitas, from the University of Chicago, is an open-source bias audit toolkit for data scientists, machine learning researchers, and policymakers to audit machine learning models for discrimination and bias.
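As an illustration of how such a toolkit can be used, here is a minimal sketch based on IBM's AI Fairness 360 Python package (aif360); the toy data frame and the "hired"/"sex" column and group encodings are assumptions made for the example, not taken from any of the cases above.

```python
# Illustrative sketch with IBM's AI Fairness 360 (pip install aif360).
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Hypothetical toy data: binary label "hired", binary protected attribute "sex".
df = pd.DataFrame({
    "sex":   [1, 1, 1, 0, 0, 0],      # 1 = privileged group, 0 = unprivileged
    "score": [0.9, 0.4, 0.7, 0.8, 0.3, 0.5],
    "hired": [1, 0, 1, 1, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["sex"],
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)

# Difference in favourable-label rates between groups (0 means parity).
print("Statistical parity difference:", metric.statistical_parity_difference())
# Ratio of favourable-label rates (values far below 1 suggest possible bias).
print("Disparate impact:", metric.disparate_impact())
```

The same package also ships mitigation algorithms (for example re-weighing the training data), but even the simple dataset-level metrics above already require the sensitive attribute to be present, a point we will come back to.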
Main approaches to detect and mitigate unwanted discrimination.
In general, there are three major approaches to detect and mitigate unwanted discrimination in Machine Learning:
- Pre-processing: in this approach, the training data is transformed (for example, by repairing or reweighting biased variables) before the training of the algorithm begins.
- In-processing: in this approach, the algorithm does not only optimize for the target variable (the goal of the algorithm); the outcome is also optimized to exhibit no discrimination, or as little discrimination as possible.
- Post-processing: this approach acts only on the outcome of the model; the output is manipulated in such a way that no undesired discrimination takes place (a simplified sketch follows this list).
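To make the post-processing idea tangible, below is a deliberately simplified sketch (not the method of any tool or product mentioned in this post) that assigns each group its own decision threshold so that positive-prediction rates roughly match; all variable names and numbers are hypothetical.

```python
# Simplified post-processing sketch: pick a per-group score threshold so that
# each group ends up with roughly the same positive-prediction rate.
import numpy as np

def per_group_thresholds(scores: np.ndarray, groups: np.ndarray,
                         target_rate: float) -> dict:
    """Return one score threshold per group yielding ~target_rate positives."""
    thresholds = {}
    for g in np.unique(groups):
        group_scores = scores[groups == g]
        # The (1 - target_rate) quantile leaves about target_rate of scores above it.
        thresholds[g] = np.quantile(group_scores, 1.0 - target_rate)
    return thresholds

# Hypothetical model scores and group memberships.
scores = np.array([0.9, 0.4, 0.7, 0.8, 0.3, 0.5, 0.6, 0.2])
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
thresholds = per_group_thresholds(scores, groups, target_rate=0.5)
decisions = np.array([s >= thresholds[g] for s, g in zip(scores, groups)])
print(thresholds, decisions)
```

Note that equalizing positive-prediction rates in this way corresponds to the independence criterion mentioned below; enforcing separation would instead require equalizing error rates across groups.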
There are several criteria for measuring fairness, including independence, separation, and sufficiency. Telefónica is developing LUCA Ethics, a post-processing approach designed to comply with the separation criterion.
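For readers who want the formal definitions: writing A for the sensitive attribute, Y for the true outcome and Ŷ for the model's prediction, the three criteria are commonly stated as conditional independence conditions:

```latex
% Common formalisation of the three fairness criteria, with A the sensitive
% attribute, Y the true label and \hat{Y} the model's prediction.
\begin{align*}
  \text{Independence:} \quad & \hat{Y} \perp A \\
  \text{Separation:}   \quad & \hat{Y} \perp A \mid Y \\
  \text{Sufficiency:}  \quad & Y \perp A \mid \hat{Y}
\end{align*}
```

For a binary classifier, separation amounts to requiring equal false positive and false negative rates across protected groups, which is exactly what the first sketch in this post measures.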
All these approaches help detect and mitigate bias and unwanted discrimination by analyzing data sets or the outcome of the algorithm. However, they have one major assumption in common that is important to review: they all assume that the sensitive attribute against which the algorithm should not discriminate is included in the data set. In the Amazon recruitment case, gender forms part of the data set. In the COMPAS case, race is an attribute of the data set. The availability of the sensitive variable enables different kinds of checks on the data set, such as its distribution and how balanced it is, and, once the model is trained, this same variable also makes it possible to check whether the model discriminates based on it, which usually corresponds to discriminating against a protected group.
Real-world data sets.
But what happens when the data set doesn’t contain any explicit sensitive variables? Not surprisingly, most real-world data sets do not contain any sensitive variables because they are deliberately designed that way. Since it is forbidden by law to discriminate based on certain sensitive variables (see above for what is considered sensitive in Europe and the USA), most organizations make an effort to exclude those variables to prevent the algorithm from using them. Collecting and storing such sensitive personal data also increases the privacy risk for organizations.
If we think about the high-profile cases mentioned above, the sensitive variables (gender and race) were actually in the data set. Gender can be expected to be present in many data sets (which is why many, if not most, examples of bias and unwanted discrimination are illustrated with gender). In the COMPAS case, race is in the data set because it is a very specific (criminal justice) domain. However, we wouldn’t expect attributes such as religion, sexual life, race or ethnic origin to be part of the typical data sets used by organizations. The question then arises: how can you know that you are not discriminating illegally if you cannot check it? The simple technical solution would be to have this sensitive personal data available in the data set, and then check the outcome of the algorithm against it. This seems, however, at odds with many data protection principles (GDPR art. 5, data minimization, purpose limitation) and best practices. Let’s look for a moment at possible ways to obtain sensitive personal data:
- Users could be asked to provide sensitive personal data to be included in the data set. This is not unheard of: several UK institutions, for example, ask for and store sensitive personal data, and users can always choose not to provide it. It seems unlikely, however, that users will consent to this on a massive scale.
- Organizations could use their internal data combined with publicly available data sources to infer the value of sensitive personal data for each of their users.
- Organizations could perform a survey with a representative subset of their users and ask them for their sensitive personal data. One could even announce that the survey forms part of an approach to check for unwanted discrimination. Any new machine learning algorithm can then be tested against this subset for unwanted discrimination (see the sketch after this list).
- Some “normal” variables are proxies for sensitive variables, such as postal code for race in some regions of the USA. If such known (and confirmed) proxies exist and they are in the data set, they can be used for testing the algorithm for unwanted discrimination.
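As an example of how the survey option could work in practice, here is a minimal sketch that joins a model's decisions with the sensitive attribute voluntarily reported by a surveyed subset of users and compares outcome rates per group; all identifiers, column names and figures are hypothetical.

```python
# Sketch: audit a model's decisions against a surveyed subset of users who
# voluntarily reported a sensitive attribute (hypothetical "ethnicity" column).
import pandas as pd

# Model decisions for all users (hypothetical).
decisions = pd.DataFrame({
    "user_id":  [1, 2, 3, 4, 5, 6],
    "approved": [1, 0, 1, 1, 0, 0],
})

# Survey responses covering only the users who agreed to share this attribute.
survey = pd.DataFrame({
    "user_id":   [1, 2, 4, 5],
    "ethnicity": ["x", "y", "x", "y"],
})

audited = decisions.merge(survey, on="user_id", how="inner")
rates = audited.groupby("ethnicity")["approved"].mean()
print(rates)  # approval rate per surveyed group; large gaps warrant investigation
```

The same join could of course feed the error-rate comparison shown earlier if ground-truth outcomes are also available for the surveyed users.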
The paradox between fairness and privacy.
This leads to an interesting paradox between fairness and privacy: in order to ensure that a machine learning algorithm is not illegally discriminating, one needs to store and process highly sensitive personal data. Some might think that the cure is worse than the disease. This paradox was recently highlighted in the context of Facebook advertising for housing, when Facebook was accused of discriminating based on race and gender. The automatic process of targeting ads today is very complex, but Facebook could try to infer the race and gender of each of its users (using its own data but also publicly available data sets), and then use this information to avoid unwanted or illegal discrimination. But would you like Facebook, or any other private company, to hold so much sensitive personal data?
Given existing privacy regulations and risks, most organizations prefer not to store sensitive data unless it is absolutely necessary, as in some medical applications. In that respect, of the four options mentioned above, the “survey” option seems to be the least risky way to equip organizations with a reasonable assurance that they are not discriminating.
Practical implications.
So, what does this all mean in practice for organizations? I believe most organizations are only starting to think about these issues. Only a few have something in place and are starting to check their algorithms for discrimination based on certain sensitive variables, but only if those variables are available in the data set. For the cases where organizations do not have sensitive personal data in their data sets (and most organizations make an effort to exclude this data from their data sets, for obvious reasons as we saw), the current state of the art does not allow systematic checks. It is true that organizations are starting awareness campaigns to alert their engineers to the possible risks of AI, and some are aiming to have diverse and inclusive teams to prevent, as much as possible, bias from creeping into the machine learning process.
Conclusion.
In conclusion, if sensitive data is included in their data sets, organizations can technically know whether or not they are discriminating in an unfair way. But when there is no sensitive data in the data set, they cannot know. This might not seem optimal, but it is the current state of play, which, until now, we all seem to have accepted. I am, however, convinced that new research will come up with solutions to tackle this problem, and thereby solve one of the most cited undesired and unintended consequences of Artificial Intelligence.
To keep up to date with LUCA, visit our website, subscribe to LUCA Data Speaks or follow us on Twitter, LinkedIn or YouTube.