The Futures of Democracy
Artificial Intelligence and Automated Decision Making
April 18, 2022 | Season 1, Episode 2 | PBS: Public Broadcasting Service

In this episode we explore the impacts of artificial intelligence and automated decision making on democracy. In what ways do AI and algorithmic decision-making systems impact the human rights of citizens and reinforce social inequality? Does AI shape our perceptions of the world, and is this a democratic problem? What are the ethical issues and problems with AI, and what accountabilities might governments have to citizens in their rollout of AI to assist in governing society? Joining us for this episode is Kate Crawford, one of the world's leading experts on the social, political, and material aspects of AI. Kate is the author of the book 'Atlas of AI', out now from Yale University Press.

Executive producers and project concept: Nicole Anderson, Julian Knowles
Series writers and researchers: Nicole Anderson, Julian Knowles
Production, sound design, and original music: Julian Knowles
Project funders/supporters: The Institute for Humanities Research, Arizona State University; PBS: Public Broadcasting Service

Visit us online at https://futuresofdemocracy.com/


The Futures of Democracy Podcast – Series 1, EP 2: Democracy, Artificial Intelligence, and Automated Decision Making
Presenters: Julian Knowles and Nicole Anderson
Guest: Kate Crawford
Publisher: PBS: Public Broadcasting Service


Julian Knowles
Welcome to The Futures of Democracy. We're your hosts - I'm Julian Knowles (and I'm Nicole Anderson). In this episode, we examine the impact of artificial intelligence and the internet on democracy. Given the significance of recent world events in this area, we have devoted two episodes to exploring it.

Joining us for this episode is Kate Crawford, one of the world’s leading experts on the social impacts and politics of AI.

Kate Crawford is a leading scholar of the social and political implications of Artificial Intelligence. Over her 20-year career, her work has focused on understanding large-scale data systems, machine learning, and AI in the wider context of history, politics, labor, and the environment. Kate is a research professor at USC Annenberg, a Senior Principal Researcher at Microsoft Research New York City, and an Honorary Professor at the University of Sydney. Kate has advised policy makers at the United Nations, the Federal Trade Commission, the European Parliament, the Australian Human Rights Commission, and the White House. She is the author of the book “Atlas of AI” (Yale University Press), which examines the social, material, and political dimensions of Artificial Intelligence.

Nicole Anderson
Kate, I would like to begin with some clarifying questions before Julian delves into a more detailed discussion with you about the implications of Artificial Intelligence for democracy. The first is: how do you define technology and Artificial Intelligence, or AI? And second, how would you define or describe how algorithms work?

Kate Crawford
Well, these are very good questions to begin with, and defining artificial intelligence is something that I think is actually a difficult task. You can define it technically, and certainly at a technical level we could look at a constellation of techniques that are currently defined as ‘machine learning’. But machine learning is really just the latest in a long history of techniques in artificial intelligence that take us all the way back to 1956, when the term was first developed at the Dartmouth Conference. At that time the theory was that you could really train machines to understand complex human systems like language, and it was about teaching them grammatical principles - this was really the idea of expert systems in AI.

But what we've seen over the decades is a shift to what is called the ‘statistical turn’: rather than understanding structures of meaning, the aim is to train machines to detect patterns - essentially large-scale pattern recognition. So, we've reached a point in the history of AI where machine learning is dominant: it's using a neural network-based approach but is essentially (very simply put) large-scale pattern detection. Now, the other thing I would say here is that sometimes we tend to think about AI as just being technical - as being a series of, you know, algorithms in the cloud, if you will. But it's actually much more than that, and part of what I do in my research is to really expand the definition of artificial intelligence to also look at, in addition to the technical practices, the social practices: who's in the room; who are the people who make the systems; what are the problems they're trying to solve; and then, thirdly, also the infrastructures behind AI. So what are these planetary-scale infrastructures like the cloud? How do they work? Who owns them? All of those broader political economy questions are also really relevant to how we define AI. So in this sense I'm giving you a much more materialist definition, one that looks at the practices of how AI is actually made in the world and what the consequences are.

Nicole Anderson
Do you think AI and technology more broadly, then, strengthen or weaken democracy? And how?

Kate Crawford
So, to unpack a question like that, it would be really important to start thinking about where artificial intelligence is being deployed in the world and how it might be having democratic impacts.

So right now, AI systems are being used in spaces like healthcare and criminal justice, in hiring - in so many domains that you may not even be aware that machine learning systems are working in the background. And in some of those domains we might think, well, that looks pretty straightforward, but they can have unexpected democratic impacts. So, rather than just assuming that AI is, you know, having a political impact when it's being deployed, say, in debates around the news or what you see on Facebook, it's also having political impacts in terms of who gets access to the best healthcare, or who is determined by an algorithm to be suitable for release on bail, or who is chosen for a job interview based on a machine learning system that's scanning their resume. This is really, at a deep level, a question of how opportunities are being allocated in society, but also how people are being represented and interpreted.

So, I've thought about this in my research in terms of a difference between 'allocative' harms - so allocating responsibilities and opportunities, and then these more 'representational' harms - how entire groups of people are represented be it around race, gender, ability, and so forth. So, at these levels, these ideas of how democracy works - who is important, who gets to speak, who gets opportunities - at this level, AI is being interwoven into all of our social institutions simultaneously and I think that raises a very real question about how democracy works.

MONTAGE BEGINS

Elon Musk
I’m really quite close to…. I’m very close to the cutting edge in AI, and it scares the hell out of me.
 
Mark Zuckerberg
With AI especially, I’m really optimistic. And I think that people who are naysayers and kind of…  try to draw up these doomsday scenarios...

Elon Musk
The base issue I see with so-called 'AI experts' is that they think they know more than they do.

Hoan Ton-That (Clearview AI founder)
Well quite simply, Clearview is basically a search engine for faces. So, anyone in law enforcement can upload a face to the system and it finds any other publicly available material that matches that particular face.

Interviewer
Do you understand why people find this creepy?

Hoan Ton-That
I can understand people having concerns around privacy.

Interviewer
What if it identifies the wrong person and that leads to a wrongful conviction? Are you worried about that?

MONTAGE ENDS


Nicole Anderson
What Kate Crawford refers to when she says AI raises questions about the process of democracy is the notion of equality and transparency for the people who have elected their governments. What is about to be discussed is that AI can erode the transparency of democratic government, as well as of democratic society as a whole. The core values that Kate Crawford refers to include: the right of voters to possess basic civil liberties; freedom of assembly; freedom of speech and association; freedom of the press (to report on and hold accountable the governments elected by the people); the people’s right to information and transparency; and the rights to inclusiveness, equality, and liberty. Kate Crawford.

Kate Crawford
Another interesting factor here is to think about the way in which democracy is often understood as a way in which people can participate and make decisions in how their societies change and grow.

But many of these AI systems are really systems that, first of all, are uninterpretable. So even the engineers who are designing a deep learning system cannot tell you why it's reaching the results it's reaching. So that represents a very real problem in terms of transparency and accountability: these kinds of core values that are often closely associated with democratic societies.

So, at a certain level, even at the level of design of how AI systems work, we are coming up against some hard problems in terms of how would people be able to see into these systems? How would they be able to make decisions about whether they want to use them in their lives or not? And how would you say, ‘I disagree with this decision; I would like to actually appeal a decision that's made by an AI system’? What are the kinds of practices of due process that have been understood as being core to democracies since the fifteen hundreds?

Nicole Anderson
In what follows, a discussion between Julian Knowles and Kate Crawford refers to the proliferation of systems for automated decision making, many of them developed by commercial entities, with the common objective of achieving efficiencies in government administration. A number of these systems have resulted in significant harm to citizens and human rights abuses. The examples mentioned are the Robo-debt scandal in Australia and the MIDAS system in the US (both targeting welfare fraud), the Horizon system in the UK (targeting financial irregularities in the postal service), and the COMPAS system in the US (assessing the risk of violent reoffending). This raises the question of how democracy functions when its elected governments do not understand the social implications of AI. Julian Knowles.

Julian Knowles
Kate, in traditional democracies, we have citizens who vote governments into power and then charge them with the responsibility of making decisions on their behalf. Now, with the emergence of AI, governments are increasingly delegating these decisions to crude forms of AI, which may have serious social consequences. One of the examples we know well is the Robo-debt scandal in Australia, where an automated system investigated social security payments and correlated them with income records in an attempt to locate fraud and overpayments. And in many cases, it got it wrong - it wrongly accused people of fraud. So, in the end it was the most powerless members of society - welfare recipients - who were pitted against a hostile AI system that caused them further harm.

While we can vote for governments, we don't have an opportunity to vote for this delegation of power to algorithms that make real decisions about people's lives. So, my question is, do you think there need to be tighter regulations around how governments use AI-based automated decision making and greater transparency around when they're using it?

Kate Crawford
Well, this is a really important question, Julian, and I think certainly the case of Robo-debt is a really good one in terms of understanding how these systems can go wrong and who pays the price. It really is so commonly the people who are the least empowered to actually go through an expensive appeals process.

A very similar case emerged in the United States over the last decade in Michigan. There was a system called the MIDAS system - believe it or not - which was there to actually (again) assess welfare recipients and (again) it was actually inaccurately making a series of claims around fraud, which then produced a whole series of very profound harms including bankruptcies and suicides.

We've had another case of course - not an AI system, but just a very basic algorithmic system - that was being used in the UK on postal workers, to decide who might actually be defrauding the postal system and embezzling funds. And again, over seven hundred people were flagged by this system and then went through an entire judicial process. This was extraordinarily painful to so many families, and again these are systems where people were saying ‘this is not true’. But they had no appeals mechanism. There was no way to look into how these decisions were being made by these systems. So, you are pointing to a very core feature of what we're seeing in the outsourcing of predictions and decision-making systems throughout all of these social institutions. The COMPAS system is another system that was used to predict whether people would be violent reoffenders in the United States. And again, there was a series of scandals around inbuilt forms of racial bias in that system too.

So, what we're hearing, I think, from so many researchers - and there are now many researchers looking at these systems - is first of all, how do we audit algorithmic systems to see how they're actually working, and how they might be impacting people? That's certainly one path. I think it makes a lot of sense. But at a deeper level how are we making decisions as citizens that we want these systems, that are in many cases flawed and problematic, to play such an important role in deciding how people should live? And that, to me, is actually a core democratic question - where do we actually get to vote on these questions?

And time and time again, as you say, governments are making these decisions and simply saying “well as your representatives, we're deciding that we will use the following systems”. And here, I think, there are possible avenues, which is, how might you actually start to make governments accountable for when they actually deploy these systems? And I co-authored a paper with a legal scholar, Jason Schultz, on thinking about AI as state actors, about the way in which states are delegating responsibility to AI, and does that create new avenues for legal responsibility for governments and nation-state actors?

Julian Knowles
I wanted to move us on now to this idea of how these invisible forces are shaping us but are not being named in various ways, because I think that is a really important point. In your book you quote Woody Bledsoe, who says (quote): “In the long run AI is the only science” (unquote), to which you respond (quote) “This is a desire not to create an Atlas of the world, but to be THE Atlas; the dominant way of seeing. The colonising impulse centralizes power in the AI field, it determines how the world is measured and defined, while simultaneously denying that this is an inherently political activity” (unquote). And then later on you say (quote) “Artificial Intelligence, in the process of remapping and intervening in the world is politics by other means although rarely acknowledged as such” (unquote). Which relates to what we were talking about before, that these are invisible forces that are making decisions, shaping us in various ways, and therefore, inherently political. But to take it one step further, which you do, you then say that (quote) “AI is now a player in the shaping of knowledge, communication and power, and machine learning presents us with a regime of normative reasoning that takes the shape of a powerful governing rationality” (unquote).

And so, my question is, would you go so far as to say that AI is now a player in shaping our perceptions of reality, and if we accept that a small number of big tech corporations are now in a position to shape reality in various ways, does this pose a baseline threat to democracy?

Kate Crawford
Well, there's a long and a short answer to that question. I think the short answer is ‘yes’. We are now living in a moment where so many of our technical systems are actually defining and predicting who we are and our value and worth in the world. So, that can be at the level of social scoring - people have heard of the social credit score in China, but in actual fact it's far more widespread, and it happens just as much in the West as it does in China. We're being scored by so many different systems, and so many different machine learning adjudications are happening all the time.
 

MONTAGE BEGINS

News presenter
Everywhere she goes OuYang Hauyu is followed: what she buys, how she behaves is tracked and scored to show how responsible and trustworthy she is. It’s called the ‘social credit system’. A person’s reputation is scored on a scale of 350 to 950. And Hauyu, with a good score of 752, is OK with it. In fact, most people are.

OuYang Hauyu
It’s a mechanism… like.. it pushes you to become a better citizen

News presenter
Thanks to advances in artificial intelligence and facial recognition, and a web of more than 200 million surveillance cameras.

Joe Rogan
Like.. they have apps on their phone - right now - that give them a social score. It’s scary! And if we give in to that kind of surveillance over here, there’s a real dark end to all that stuff!

Sam Parker (credit score repair sales)
Hey everybody! Sam Parker here, CEO of My Credit Guy credit restoration! In past weeks we’ve talked to you about payment history which makes up 35% of your scoring algorithm. We’ve talked to you about credit utilization, which makes up 30% of your scoring algorithm. Last time we talked to you about…..

Hoan Ton-That (Clearview AI founder)
Clearview is basically a search engine for faces.  So, anyone in law enforcement can upload a face to the system and it finds any other publicly available material that matches that particular face. 

What’s the best application of this technology? We found that law enforcement was such a great use case.

Interviewer
How many law enforcement and intelligence agencies are using this tool at the moment?

Hoan Ton-That
Over 600 now.

MONTAGE ENDS


Nicole Anderson
Over the past decade, the Chinese Communist Party has been constructing a system of morally ranking its citizens. The system uses tracking and AI to monitor the behaviour of its population through facial recognition, location tracking, communications monitoring, daily habits like shopping, time spent on video games, spending patterns, bill payments, social media posts, traffic and criminal offenses, and so on. The system then allocates scores for behaviors and ranks citizens based on their ‘social credit’. If you score low on the scale, various punishments are issued, including travel bans, slowed internet speeds, restrictions on class of travel, and loss of access to good hotels. Kate Crawford now goes on to talk about how AI systems profile citizens. Kate Crawford.

Kate Crawford
In some ways it really cuts to core questions of identity. So, one of the things that I've studied a lot is how machine learning systems actually allocate race, gender, even character, personality, morality, emotions, criminality, and sexuality, based on images of people's faces. So, in the realm of computer vision, it's extremely common that computer vision systems will be simply tracking a picture of you - perhaps it's on Facebook, perhaps it's as you walk down the street - and then running a series of algorithmic predictions around who you are.

Now, the more that I look into these systems, the more that I am just appalled at how problematically narrow the definitions of ‘human’ really are. So, of course, it should be no surprise that researchers like Os Keyes recently published that 95% of all papers in gender detection in AI use a binary model of gender. So again, immediately erasing anybody who sees themselves as living outside of that sort of binary model.

Again, we've also looked at the way that race is being automatically detected using these profoundly problematic four-part categories of race, which really sound like something that comes out of apartheid South Africa in the twentieth century. And then you could go even further back, to the Samuel Mortons of the world, who collected skulls and tried to define, you know, the four races of the world. We're seeing a sort of return to those phrenological and, I would say, quite unscientific ways of understanding human beings.

And again, we're now seeing it in studies that claim to predict your sexuality, but again using a binary model of ‘straight’ or ‘gay’. I mean, these kinds of, I would say, almost deracinated, calcified ideas - profoundly normalizing, yet extremely narrow - are built into technical systems that are assessing you and assessing your worth absolutely all the time.

So, that to me represents a very real democratic question: what kind of a world do we want to live in? What kind of impact does that have on civil society generally? And what does it mean in terms of, not just how we understand other people, but how we understand ourselves at a core level? Humans are so rich in complexity, constantly in change and in flux, and yet we have these systems that are trying to pin people to labels like so many taxidermied butterflies. It's a deep and profound problem in the way that humans in the world are being classified by these technical systems.

Julian Knowles
That’s really interesting, because I was thinking about this lone Silicon Valley machine learning programmer and the way they view the world. And the algorithms that get rolled out are really just a single person's view of how you might classify things like race, gender, and so on.

And so, how do tech companies make it a more nuanced, more representative, less biased system when the programming teams may be so limited …

Kate Crawford
That's it. And what you've just pointed to is the dynamic that we're seeing in Silicon Valley: when researchers like myself make these kinds of points, the response that you get is, well, we can add more categories. You know, we'll remove some of the more offensive terms in how we categorise people, and we'll introduce more demographic variety - rather than actually looking at the core problem, which is: should we be classifying people at all? Is that type of centralised power, and this desire to put people into small boxes, itself a broken methodology? I would argue that it is, and I would say that we've hit an absolute hard limit - which is not that these systems need to be de-biased, or that they need to in some way be made more diverse; we actually need to say that some of these things are not appropriate for machine learning to be doing at all. We hit a hard limit.

Another one of the examples I use in the book is emotion detection. These are systems where, while you're doing a job interview, say, or a kid is doing remote schooling, your face is being analyzed for a series of micro-expressions, as they're called, and then put into one of six categories of universal emotions. So, you're either feeling happy or sad or angry or surprised, etc. Now, I trace this idea of six universal emotions back to a psychologist in the 1960s by the name of Paul Ekman, who really started to push this idea despite the fact that there were a lot of people who completely disagreed with him, like the anthropologist Margaret Mead, who said there are no consistent, singular emotional categories that hold across cultures and across time. And of course, in very recent years the psychologist Lisa Feldman Barrett did an enormous survey study and showed definitively that there is no consistent relationship between an expression on your face and how you really feel on the inside. This idea that the face is the window to the soul has been scientifically disproven, yet nonetheless these technical systems have this idea baked into them. You know, we have large companies like Goldman Sachs that have used these systems that do actually try to detect from your face whether you'll be a good employee, using these kinds of really deeply broken heuristics. So we're finding these kinds of ideas from the past being built into the technical systems that will determine our futures, and that, again, is a core democratic question.

Julian Knowles
Kate thank you so much for talking with us today.


Nicole Anderson
In the next episode, Julian Knowles and I will extend this conversation about the implications of AI for our democratic processes into the realms of AI and technological regulation, government and corporate accountability, and democracy. Joining us again will be Kate Crawford, along with philosopher of technology Adam Nocek, cybersecurity specialist Jamie Winterton, and world-leading scientist Peter Schlosser.

Subscribe to us on your favorite podcast service, so you can be alerted to new episodes when they arrive, or visit us on the web at https://futuresofdemocracy.com/
______

Transcript copyright Julian Knowles and Nicole Anderson (2022). All rights reserved. Free for educational use with attribution.