Data, Algorithm, “Artificial” Intelligence: What’s Wrong With That?

06/07/2021 |

By Mich/Michèle Spieler

Data storage and control by big corporations have become a resource for more exploitation, racism, and gender discrimination.

Ada.vc, 2017

There is a lot of hype around data, algorithms and artificial intelligence (AI) in many regions of the world. Sometimes it is about futuristic technologies like self-driving cars, but there are already everyday applications that impact our communities in negative ways. As feminist, anti-capitalist, anti-racist activists, we need to understand the implications and politics of these technologies, because in many cases they exacerbate wealth and power inequalities and reproduce racial and gender discrimination.

Data, algorithms and artificial intelligence are playing more and more of a role in our lives, even if most of the time we might not even be aware of their existence. Their impacts can be equally invisible. Nevertheless, they are related to all our struggles for a more just world. Access to these technologies is unequally distributed and is shifting power even further towards already powerful institutions: militaries, police, and corporations. Only a handful of private actors have the computational capacity to run the larger AI models, which is why even universities depend on them for research. The data, on the other hand, is produced by us on a daily basis, sometimes consciously, but sometimes simply by carrying a smartphone around without even using it.

Samuel Daveti, Lorenzo Palloni, Alessio Ravazzani / Automating Society Report 2020

How is this data being used in ways that harm us and our communities, and exacerbate systems of oppression?

My exploration of this question is shaped by where I live (Turtle Island, Canada, Quebec) and is drafted as a contribution to our collective challenge of sharing and learning from our experiences and analyses in different parts of the world.

A few years ago, the Facebook–Cambridge Analytica scandal made a lot of headlines: personal data had been used to influence votes and elections in Britain and in the US. Often we only find out such facts thanks to whistleblowers, as there is a complete lack of transparency surrounding the algorithms and the data that goes into them, which makes it difficult for us to know about their impact. A look at a few examples will demonstrate how these technologies and the way they are deployed change how decisions are made, worsen working conditions, and exacerbate inequalities and oppression, all while having a negative impact on the environment.

Automated decision-making (ADM) systems use data and algorithms to make decisions that would otherwise be made by humans. They are changing not only how decisions are made but also where and by whom. In some cases, they displace decision making from the public realm to private spaces, or outright hand control over public space to private companies.

Some insurance companies have implemented ADM and AI technology to determine whether claims are legitimate. They claim it is a more efficient and cost-saving way to make those decisions. But what data is used and what criteria are applied in making these determinations are often not public, being considered part of a company’s trade secrets.

In some cases, insurers also use data to predict risks and calculate fees based on expected behaviour, which is just a novel way to undermine the solidarity principle at the base of collective insurance and to further neoliberal, individualistic principles. Furthermore, these models use past data to predict future outcomes, which makes them inherently conservative and predisposed to reproduce or even exacerbate past discrimination. Even when race is not used as a data point directly, indicators like postal codes often serve as a proxy for race, and these AI models tend to discriminate against communities of colour.

Not only private companies but also governments count on AI-based systems to deliver services more efficiently and to detect fraud — which is often code for cutting back services. Chile is among the countries that have started a program to use AI to manage health care in order to reduce waiting times and make decisions on treatment. Critics of the program fear that the system causes harm because it perpetuates biases based on race, ethnic or national origin, and gender.

Argentina developed a model in collaboration with Microsoft to prevent school dropouts and teenage pregnancy. Based on information like neighbourhood, ethnicity, country of origin or hot water supply, an algorithm predicts which girls are likely to get pregnant, and the government then targets services accordingly. But the government is using this technology to avoid implementing comprehensive sexual education, which, incidentally, does not factor into the model’s predictions of teenage pregnancy at all.

Under the label of “Smart Cities”, local governments are handing over entire neighbourhoods to private companies to experiment with their technologies. Sidewalk Labs, a subsidiary of Alphabet (the company that owns Google), wanted to develop a neighbourhood in Toronto, Canada, and collect vast amounts of data on residents in order to, for example, predict their movements and regulate traffic. The company even had plans to levy its own property taxes and control some public services. Had it not been for activists mobilizing against this project, the government would have handed over control of public space to one of the largest and most powerful private companies in the world.

Giving private companies decision-making power over public space is not the only issue with “Smart City” initiatives. As an example from India shows, they also tend to create surveillance on a massive scale. The police in the city of Lucknow recently announced a plan to use cameras and facial recognition technology (FRT) to identify women in distress based on their facial expressions. Under the guise of fighting violence against women, several Indian cities have spent huge amounts of money on surveillance systems, money that could instead have gone to community-led projects fighting gender violence.

Rather than addressing the underlying issues, the government perpetuates existing patriarchal norms by creating surveillance regimes. Moreover, facial recognition technology has been proven to be significantly less accurate for anyone who is not a white cis man and emotion-detection technology is considered deeply flawed.

AI is leading to increased surveillance in many areas of life in many countries, but mainly in liberal democracies: from proctoring software that monitors students taking exams online to what is called “smart policing”. This leads to the over-policing of already marginalized communities. A good example is body cams, which are touted as a solution to police brutality and serve as a counter-argument against calls to defund or abolish the police.

From a feminist perspective it has to be noted that surveillance technologies not only exist in the public space, but also increasingly play a role in domestic and partner violence.

Law enforcement bodies also create “gang databases” that lead to increased criminalization of racialized communities. Private data-mining companies like Palantir and Amazon are known to support immigration agencies in their efforts to deport undocumented immigrants. AI is being used to predict where crimes will occur and who will commit them. As these models are based on past crime data and criminal records, they are heavily biased against communities of colour. Additionally, they can actually contribute to crime rather than prevent it.

Another example of how these AI surveillance systems enforce white supremacy and hetero-patriarchy is airport security. Black women, Sikh men, and Muslim women are subjected to invasive searches more often. And because the models and technologies enforce cisnormativity, transgender and non-binary people are flagged as deviant and scrutinized.

Surveillance technology is not only used by police, immigration agencies and militaries. Companies are increasingly surveilling employees through AI. As in any other context, surveillance technology in the workplace reinforces existing discrimination and power disparities.

This development may have started within the big data and platform companies. But as the fastest-growing sector, data capitalism imposes new working conditions not only on people who work in the sector but far beyond it. Amazon might be the best-known example of such surveillance: workers are constantly monitored, and those who repeatedly fall behind productivity rates are automatically fired.

Other examples include the clothing retail sector, where decisions such as how to display merchandise are now made by algorithms, robbing workers of more and more of their autonomy. Black people and people of colour, especially women, are more likely to find themselves in insecure, lower-paying jobs and therefore are often the ones most affected by this dehumanization of work. Platform companies like Amazon or Uber, backed by enormous amounts of investment capital, not only change their industries, but are able to impose changes in legislation that undermine workers’ protections and will impact whole economies. They did so in California, claiming that the change would create better opportunities for workers of colour. But a recent study has found that it actually “legalized racial subordination”.

We have seen so far that AI and algorithms contribute to power disparities, shift sites of decision making from the public sphere to opaque private companies, and exacerbate harm inherent to racist, capitalist, heteropatriarchal and cisnormative systems. In addition, these technologies often pretend to be fully automated when in fact they rely on large amounts of low-paid labour. And when they are fully automated, they can consume outrageous amounts of energy, as demonstrated in the case of some large language models. Bringing such facts to light has cost prominent researchers their jobs.

Activist strategies to resist these technologies and/or make visible the harm they cause

These strategies often start with understanding the harm that can be caused and documenting where technologies are being deployed. Our Data Bodies has produced the “Digital Defense Playbook”, a popular-education resource on understanding how communities are impacted by data-based technologies.

Not My AI, for instance, is mapping biased and harmful AI projects in Latin America. Organizers Warning Notification and Information for Tenants (OWN-IT!) have built a database in Los Angeles to support renters fighting rent increases. In response to predictive policing technology, activists created the White Collar Crime Risk Zones map to predict where in the US financial crimes are most likely to occur.

Some people have decided to no longer use certain tools, like Google’s search engine or Facebook, thereby refusing to provide these companies with ever more data. They argue that the issue is not individual data points but their aggregation, which is used, in ever more opaque ways, to restructure our environments so that they extract more from us in the form of data and labour.

Another strategy is obfuscation or data poisoning: activists have created plugins that randomly click Google ads or randomly like Facebook pages in order to throw the algorithms off. There are also ways to prevent AI from recognizing faces in photographs or using them to train algorithms.

An altogether different approach is presented by the Oracle for Transfeminist Technologies, a card deck that invites people to collectively envision different technologies.

Indigenous peoples on Turtle Island (US and Canada) are all too familiar with surveillance and with having large amounts of data collected on them and used against them. Out of this experience, they have created approaches to Indigenous Data Sovereignty: principles on the ownership, collection, access and possession of data, in order to avoid additional harm and to allow First Nations, Métis and Inuit to benefit from their own data.

AI, algorithms and data-based technologies are not only problematic because of privacy issues; a lot more is at stake. While we organize for our struggles, we are likely to use technology that produces data for the companies that benefit most from data capitalism. We need to be aware of the implications of this, of the harms these technologies cause and of how to resist them, in order to mobilize effectively.

____________________________________________________________________________

Mich/Michèle Spieler, based in Montréal/Tiohtià:ke/Mooniyaang, has long been passionate about the question of how technology contributes to, or can help eliminate, oppression. They work as a Community Technology Co-coordinator at COCo (Centre for Community Organizations) and have been involved in several feminist media projects. They have been a long-time activist with the World March of Women.
