The Futures of Democracy
Artificial Intelligence, Transparency and Accountability
In this episode we continue our conversation about AI and democracy to examine the issue of government and corporate accountability and the impact this has on democracy. We also examine the ethics of personal data mining and its use for political purposes.
Do we know we are being classified and scored on a daily basis? What data is being collected from us, how is it being used, and what ethical questions does this pose? What kinds of controls might there be on the use of AI and personal data? How do we re-trace the steps of AI and make it accountable?
Joining us again is Kate Crawford, a leading scholar of the social and political implications of artificial intelligence, along with Adam Nocek, a leading philosopher of technology, and scientist and education leader Peter Schlosser.
We begin by exploring what we mean by ‘intelligence’ in the context of AI, because it’s clear that the term is used uncritically in many situations. Can we really say that AI is intelligent, when in the previous episode we saw how crude and unsophisticated it can be? And what are the political, ethical and democratic consequences of AI, given its clear limits and deficiencies?
Executive producers and project concept: Nicole Anderson, Julian Knowles
Series writers and researchers: Nicole Anderson, Julian Knowles
Production, sound design, and original music: Julian Knowles
Project funders/supporters: The Institute for Humanities Research, Arizona State University. PBS: Public Broadcasting Service
Visit us online at https://futuresofdemocracy.com/
The Futures of Democracy Podcast – Series 1, EP 3: 'Artificial Intelligence, Transparency and Accountability'
Presenters: Julian Knowles and Nicole Anderson
Guests: Kate Crawford, Adam Nocek, Peter Schlosser
Publisher: PBS - Public Broadcasting Service
Nicole Anderson (VO)
Welcome to the Futures of Democracy. We’re your hosts – I’m Nicole Anderson, and I’m Julian Knowles. This is the second part of our discussion on the impacts of AI on Democracy. In the last episode, we discussed the broader social implications of AI, the way in which it is rapidly entering all of our social institutions, the rise of human classification and social scoring, as well as the impacts of the rise in automated decision making on democracy and society at large.
Do we know we are being classified and scored on a daily basis in western society? What ethical questions does this pose? What kinds of controls might there be on the use of this technology? Where is AI making decisions about our lives and how do we re-trace its steps and make it accountable?
In this episode Julian Knowles and I pick up the conversation about AI and democracy to examine the issue of government and corporate accountability and transparency, and the impact this has on democracy. Joining us again is Kate Crawford, a leading scholar of the social and political implications of artificial intelligence, along with Adam Nocek, a leading philosopher of technology, and scientist and education leader Peter Schlosser.
We begin by exploring what we mean by ‘intelligence’ in the context of AI, because it’s clear that the term is used uncritically in many situations. Can we really say that AI is intelligent, when in the previous episode we saw how crude and unsophisticated it can be? And what are the political, ethical and democratic consequences given the clear limits and deficiencies of AI? Julian Knowles.
Julian Knowles
Kate, in your book, you point out that (quote) “AI is neither artificial nor intelligent” (unquote). And I wonder what this really means in the context of an autonomous agent that’s making decisions about people? The issue that I see is that AI isn't sentient, it lacks empathy. In other words, AI doesn’t experience emotion or understand affect.
And so, if AI is making decisions about humans - who are sentient beings, who have feelings - there are clearly impacts from artificial intelligence that sit in an emotional realm and they can cause real harm, not just from poor decisions, but from this fundamental lack of empathy towards humans around decisions.
And so, we can draw a clear line here: a lack of sentience means that AI does not have empathy, which leads to harm. Is this something that you think is a real problem with AI?
Kate Crawford
You've put that really well, and certainly in my work I've been critiquing this idea of artificial intelligence. The term is itself a trap, because when people hear ‘artificial intelligence’ they assume ‘oh, this system is intelligent, and if it's artificial, perhaps it's also somehow more objective or more rational than a human being’.
But, in actual fact, when you look at the ways in which AI is made and what it is built from, both in terms of its materials, the minerals and the energy and the things that it takes from the environment, and in terms of the number of people involved in making AI, not just the AI designers we hear so much about in Silicon Valley, but the click workers who are painstakingly labelling pieces of data, often for less than $2 an hour, it really is digital piecework.
And then, all the way through to the amount of data that is being extracted from us to actually make these systems work. It is the opposite of artificial. It's actually profoundly material.
MONTAGE BEGINS
Journalist
This is the Kibera slum in Nairobi, Kenya. This is Brenda, she is among a team of around two-thousand people who work in this building for Samasource [now Sama], an organization that recruits people from the very poorest parts of the world. And their important job is to give artificial intelligence its ‘intelligence’. If you want to know what a tree is, it takes millions and millions of pictures of trees. That’s what’s called ‘training data’, and it’s here where that data is created.
Samasource clients include Google, Salesforce, eBay, Yahoo, and many others, working on everything from self-driving cars to online shopping. While most of their employees are of course in the developing world, the company's headquarters can be found in San Francisco's ‘Mission District’.
MONTAGE ENDS
Nicole Anderson (VO)
So what Kate is suggesting here is that the harms of AI can extend beyond poor decision-making systems to the material realms of labor, the exploitation of workers, human bodies, and environmental extraction. And we need to be thinking about the potential harms of AI in a much broader context. Kate Crawford.
Kate Crawford
But then we look at this term ‘intelligence’, and I think with ‘intelligence’ there's this belief that these are systems that can perform cognitive tasks. Well, again, I think this is a misnomer, because in actual fact these, again, are just systems of large-scale pattern recognition. Now, pattern recognition is indeed one of the things that we do that's an important feature of intelligence. But it is by no means more than a small component of all of the ways in which intelligence expresses itself in humans.
So, we're really looking at a system that is nothing like human intelligence and ascribing a depth to it that simply isn't there. And the more time that I have spent studying these systems - I mean, I've had a 20-year career of looking really closely at large-scale technical systems - I am always shocked that these systems are far more stupid than people actually realize. And yet we are assuming a high level of capability and competence and insight from systems that simply don't have it. And working with the historian of technology Alex Campolo, we came up with this term ‘enchanted determinism’. And it's this phenomenon whereby people see technical systems as both ‘enchanted’, meaning they're magical, they're alien, they're superhuman, they can do things that we cannot do, and at the same time ‘deterministic’, in that they can look at a set of signals and then quite reliably predict what's going to happen.
This is an extremely dangerous phenomenon, because it means that these systems are being seen as beyond human understanding, and beyond human and legal regulation. And I think it's a big part of why we've seen a hesitancy around regulating technology companies and technological systems, because we're certainly in a place now where the speed at which these systems are being designed and rolled out and tested on millions of people every day has by far and away outpaced the regulatory and policy responses.
Nicole Anderson
Adam, you have recently published a book called Molecular Capture: The Animation of Biology (Minnesota, 2021), and you are working on another called Governmental Design: On Algorithmic Autonomy. As a philosopher of technology, you have explored the problems with the way in which we conceive of AI, the speed of its deployment within democratic society, and the problems this poses for controlling or regulating AI. What do you think about the term ‘intelligence’ that Kate just mentioned?
Adam Nocek
As a philosopher of technology, there's a whole host of conceptual questions that have to be raised when it comes to Artificial Intelligence. Personally, I think we need to stop using the term ‘Artificial Intelligence’. I think it's based on a deeply anthropocentric analogy between human cognition and what machines are doing. We need different conceptual frameworks. And I think this is important to recognize: this isn't just about better socializing and integrating the technology. It's important to develop new ways of conceptualizing this technology, precisely so as not to conflate what machines are doing with what humans are doing. These are different modes of processing information, and we need to get clear on that, because I think a lot of political confusions result when we make this conflation.
Nicole Anderson (VO)
Regulation, and therefore accountability, is clearly a big issue, especially in light of what both Adam and Kate just spoke about: the fact that AI makes decisions and shapes humans in particular ways. And we can see the implications of a lack of accountability, for example, in social scoring and classification, which we see in China but also in the West. Julian Knowles and Kate Crawford.
Julian Knowles
As we discussed in the previous episode, AI is being used to ‘score’ people in the West, as it is in China, and in almost all instances the average person has no idea they are being scored or classified by these technological systems. This raises the question: how do we first know this kind of thing is going on, and secondly, how do we then put in place controls around the deployment of social scoring and human classification? And what are the practical problems we face in doing it?
Kate Crawford
And therein lies the problem: what would that look like? And, as you say, even at the first post, even understanding where these systems are being deployed and how they work, these are questions that are shrouded in trade secrecy; in many cases, you know, these systems are kept as corporate secrets that are extremely difficult for a researcher to find out about. And in many cases, there are also laws that prevent researchers from reverse engineering these systems.
So, the Computer Fraud and Abuse Act has been used to actually prosecute researchers who have tried to find out how these systems are working and how they might be harming people. So, we have reached a point in history where, just when we need that information so urgently in order to make decisions about, you know, whether a system should be deployed or not, it's never been harder to get that information.
Nicole Anderson
Adam, what are your thoughts about how AI is deployed and the consequences for transparency?
Adam Nocek
The question of AI in this context is incredibly complicated. And you might say ‘why?’ Well, it's because AI technologies are used very often as technologies for governing and this defines the whole field of algorithmic governance. These are algorithms that are used as political technologies to recognize patterns in the data, to make recommendations and predictions, you know, and these predictions affect real people's lives. They put people on the no-fly list, they recommend arrests, they recommend prison sentences, and so forth.
So, you know, the trouble is that the kind of reasoning and deliberation required to make these choices in democratic societies, when we're using AI technologies, is not visible. And this is one of the real tricky things about AI - you can't retrace the steps of an AI. You know, an AI makes recommendations and makes decisions, and so forth. We can't actually see the deliberation processes that it uses to judge and nudge us in particular directions. So, yes, AIs pick out patterns that humans very often cannot detect, but this isn't always a good thing when we're talking about our political technologies. So, this brings up a kind of bigger question, which is whether our technologies of governance for democratic practices coincide with the technologies that are used to govern us when we're using AI.
Nicole Anderson (VO)
Let’s pick up more on this topic of transparency: that we don't know what's going on with AI and how it’s being used. And if we don't know what's going on, and if there are commercial-in-confidence or national security laws preventing us from finding out, then that’s a real problem. If the law is operating in the name of capital or in the name of the nation-state, then a citizen can be blocked from ever investigating what is going on and holding governments and corporate entities to account. Julian Knowles asks Kate Crawford about this lack of governmental and corporate transparency, and uses the example of Edward Snowden, Julian Assange and Chelsea Manning.
Edward Snowden is responsible for providing press outlets with intelligence files from one of the world’s most secretive organizations – the NSA (National Security Agency). The Snowden Archive is a physical cache of the documents and files stored in various secret locations, many of them disconnected from the internet for protection from government agencies.
Wikileaks was founded by Julian Assange and, like Edward Snowden, released classified documents to the press. Amongst an extensive program of disclosures, it published a series of leaks provided by US intelligence analyst Chelsea Manning. Julian Assange is currently imprisoned in the UK, fighting extradition to the USA to face charges of spying and conspiracy to hack government computers. Julian Knowles and Kate Crawford.
Julian Knowles
Kate, in your book you speak about visiting the Snowden Archive. This idea of ‘government transparency’ has really loomed large in recent years through figures such as Edward Snowden, and Julian Assange and Chelsea Manning of the Wikileaks disclosures, all of whom were wanting to draw public attention to what governments are doing in the name of their citizens. And of course, as one might expect, governments are not very happy about that!
So, there seems to be a fundamental mismatch between the intense level of surveillance and scoring that governments and corporations are doing on citizens, and the difficulty that citizens in turn have in getting information back out of governments; even the most basic information that's held about themselves.
So, this might be tearing at the fabric of society. And it may partly explain why there's a breakdown of trust that we're now seeing between citizens and governments, where authority is not trusted in the same way that it once was.
The question is, do you think that, in future, democracies will somehow have to navigate this imbalance of relative transparency between the citizen and the government in order to be able to operate effectively? To avoid people storming the Capitol, for example?
Kate Crawford
It is actually a really, really hard question, because in so many cases, while transparency is necessary, it's not sufficient. So, you know, in a study I did with Mike Ananny, called ‘Seeing Is Not Knowing’ [Seeing Without Knowing], we really looked at the fact that even if you can see, for example, what data a technical system is using, and even if you can track the outputs that it's actually producing, that doesn't necessarily tell you how it's going to be impacting populations over time.
And ‘transparency’, I think, is so often asked to hold so much water for what we really want, which is ‘accountability’. Like, how do you make these systems accountable? You know, nobody should have to go and do a computer science PhD just to decide whether or not they want to be assessed by a crime prediction algorithm in their neighborhood. You know, you shouldn't have to have complete transparency in order to assess whether something is going to be feeding into the criminal justice system.
So, I think here we're sort of looking at a bigger problem, which is this question of how nation states could be regulating these systems and could be, you know, acting as a type of accountability structure.
When we look at the shift in power that's happened in the tech sector in just the last fifteen years – so, historically it is a fraction of time, it's just a blip - but nonetheless, we have now seen technology companies become some of the richest and most powerful companies on the planet, that are in many ways acting as transnational entities. So, Benjamin Bratton uses this term of ‘power states’ - things that are larger than nation states. And in this sense, it makes it extremely difficult, even if you can get the political will to regulate companies like Facebook and Google, how would you actually do it in such a way that these companies wouldn't simply move their data processing offshore in a type of ‘regulation arbitrage’? You know, that is the problem that we face.
So, much of the thinking right now about governance around AI systems is: what we really need is an international structure. But this is happening at the same time, as you say, as a complete loss of faith in and support for international institutions like the UN, which have in many cases been really white-anted by particular sorts of political agents over the last decade. So, we're in a moment where there's less authority in political institutions, let alone international political institutions, where we would most need to see this type of connective tissue of governance that would apply to everybody.
So we've hit a very difficult moment in history, where the things that we need would require trusted institutions and strong national and international governance, at a time when we do not have those capacities. And to me, this is the biggest problem that we face: for the democratic needs that we have, we don't have easy solutions.
Nicole Anderson (VO)
I want to turn to Peter Schlosser, who is the Vice President and Vice Provost of the Julie Ann Wrigley Global Futures Laboratory at Arizona State University. He's the University Professor of Global Futures and holds joint appointments in the School of Sustainability, the School of Earth and Space Exploration in the College of Liberal Arts and Sciences, and the School of Sustainable Engineering and the Built Environment in the Fulton Schools of Engineering.
Nicole Anderson
Peter, do you think citizens are currently mistrustful of our government institutions and corporations, and what does this mean for our democratic processes?
Peter Schlosser
Yeah, I think right now we do have a lot of mistrust in governments, of whichever flavor they are, and I often think about that in terms of people going into that mode of mistrust when they struggle to lead their life the way they would like it to be. In a sense you could say we are possibly at the beginning of a revolution, meaning that people are not agreeing with the way they are governed, with the way their life evolves, and thereby are actually starting to look for their own way out of that situation.
Nicole Anderson
So how then do we get people to recognize or respect basic truths in a world that is often driven by untruthful messaging and by individualism?
Peter Schlosser
It comes back to that question of trust, in a way. How can those who actually have responsibility to make governance structures work win back trust? And part of the mistrust is that we, as a society, did not pay enough attention to issues of inequality, which has led to many people struggling with their lives, struggling with imagining a future for themselves, and also a future for their children where they have options. To reverse that, we'll probably need some time. If I look at the situation right now, I see that there are large fractions of society who simply feel that decisions made by their governing structures are not helping them, that in a way they see them as restricting them, and that leads to dissatisfaction. And that of course then leads to opposition, and to electing people who in the end will paralyze the governance system as a whole.
Julian Knowles
To what extent do you think that the rise of the internet and social media has had an effect on democracies and this notion of trust? Because we're seeing corporations that are larger and more powerful than we've ever seen before, that are arguably more powerful than nation states themselves, and that are more or less entirely ungoverned. And their business model is increasingly becoming conflict, which they would characterize as a kind of ‘engagement’.
Is this a problem for global democracies? It raises the question of internet governance and whether nation states should work with each other to protect democracy from tech corporations operating platforms for profit in ways that might inherently threaten it?
Peter Schlosser
I think what we are seeing is an increasing disparity between those who have an abundance and those who are struggling to make a regular, a normal, life work for them. And that is the combination of the structures that allow these companies to develop and flourish, and the drive of these companies to increase profit ever more. So, I think it's a combination of the increasing disparity, together with easy access to information that helps people to confirm their value system and satisfies their need to get information confirming that the world, in their view, is unjust.
Julian Knowles (VO)
So maybe transparency isn’t sufficient to build trust. As Kate Crawford says, we need mechanisms for ‘accountability’, which might then be the basis to develop trust. And accountability not just from governments, but from corporations as well. Nicole Anderson and Adam Nocek.
Nicole Anderson
Adam, this discussion raises another question. How do we get the owners and leaders of the major social media platforms and tech companies to do what they need to do to foster and protect democracy?
Adam Nocek
Yeah, this is a really tricky question, especially today, and probably impossible to answer. But to think through this, the first thing we have to get clear on, or at least recognize, is something that people already know to be true: tech companies have far too much power. You know, there's an inordinate amount, or a concentration, of power within tech companies, so much so that they operate largely outside of state control, which is one of the reasons we see the emergence of new technological sovereignties that are unevenly affecting global populations. So, you know, one of the things to do to make them take more responsibility… is that we need to break up these tech monopolies. So that's one step.
You could say this could lead to a redistribution of power dynamics and so forth, but one of the things I've been asking myself is: does this get the job done? And probably not. And look, this is like the sort of revolutionary promise of blockchain technologies and Web 3.0, where everyone in some ways owns a piece of the pie. But I have a really hard time imagining blockchain technologies and Web 3.0 not actually just monetizing absolutely everything. And so, just because we all become ‘owners’, and we all become ‘producers’, that doesn't really get us out of the trouble. Now we just become even more entrepreneurial, even more like ‘entrepreneurs of the self’. So, I don't know if that really answers the question, but I do think that these are some of the things we need to be thinking about.
Nicole Anderson
In the corporate world, even before the tech companies emerged, there was the concept of ‘corporate social responsibility’ - that in order to keep your stock price up, you needed to be seen to be not doing harm. In fact, you needed to be delivering some social good. So how do we raise the issue of corporate social responsibility in relation to tech platforms? How can tech companies be held to account against this principle?
Adam Nocek
I mean, unfortunately I think it usually comes down to profits, right? If these corporate leaders are answerable only to the shareholders, and somehow social responsibility plays into that sort of calculus, then of course they're going to have to take some form of social responsibility. But I think it's always going to be according to: where are the profit margins? Are the greatest profit margins here or there? So, you know, this is a larger infrastructural, political-economic design question about how we redesign corporate responsibility such that companies do have to make changes and become accountable to more than just the shareholders.
MONTAGE BEGINS
Journalist
A Big Brother world where your life is captured digitally, then the data analyzed to win your vote.
Julian Knowles (VO)
Cambridge Analytica was a British political consulting firm that rose to notoriety via the Facebook–Cambridge Analytica data scandal.
Journalist
Thanks to advances in data analysis and digital communications, it’s now becoming possible for political leaders to capture the expressions of all American minds.
Julian Knowles (VO)
The firm was founded in 2013 as a subsidiary of the global election management agency SCL Group. British businessman and Eton graduate Alexander Nix was appointed as the company’s CEO.
Journalist
Alexander Nix showed me how his firm puts people into groups by psychological characteristics.
Julian Knowles (VO)
In 2018, the media ran stories that the company had acquired and used personal data about Facebook users from an external researcher who had told Facebook he was collecting it for academic research purposes.
A resulting investigation found that the company was using the personal data of up to 87 million Facebook users who had used a Facebook app called ‘This Is Your Digital Life’.
Not only did the app harvest data from its users, it became apparent that the app also gained access to personal data from each user's entire friend network, the majority of whom had not granted the app access permissions.
Christopher Wylie
My name is Christopher Wylie. I'm a data scientist, and I helped set up Cambridge Analytica
Julian Knowles (VO)
Cambridge Analytica claimed to use the data to perform audience segmentation that became the basis for ‘behavioral micro-targeting’ and persuasion techniques.
Carole Cadwalladr (journalist)
And people had no idea that the data had been taken in this way?
Christopher Wylie
No
Carole Cadwalladr (journalist)
So you’ve harvested my data and then you have used that to target me in ways that I can’t see, and that I don’t understand?
Julian Knowles (VO)
According to Alexander Nix, Cambridge Analytica ran digital election campaigns for Donald Trump.
Carole Cadwalladr
So you psyops-ed Steve Bannon, basically
Christopher Wylie
In a way, he was, he was a target audience of one
Carole Cadwalladr
And you changed his perception of reality
Christopher Wylie
And we changed his perception of who we were, and what we were doing, and what situation he was in
Carole Cadwalladr
And then from there it was like you took that to America to change the perception of reality for America
Julian Knowles (VO)
The company also worked on elections in India, Kenya, Mexico, Malta and a range of other countries. Allegations were also made that they worked on the Brexit campaign.
Journalist
In Europe, tough privacy laws make some of this kind of data work illegal. And yet many of these techniques pioneered in America were quietly used in last year's general election in Britain, and in this year's EU referendum.
Julian Knowles (VO)
In a Channel 4 documentary, secretly filmed video revealed Cambridge Analytica executives saying they had worked on over 200 election campaigns across the world and they had used a number of coercion techniques in their business practices.
Christopher Wylie
It’s incorrect to call Cambridge Analytica a purely sort of data science company. It is a full-service propaganda machine
Julian Knowles (VO)
There are various ways of viewing these kinds of data-based persuasion services, from routine market research, marketing and messaging services, to something far more insidious. Former Cambridge Analytica data scientist Christopher Wylie refers to the weaponization of data.
Christopher Wylie
If you want to fundamentally change society, you first have to break it … This was the weapon that Steve Bannon wanted to build to fight his culture war
Julian Knowles (VO)
Whilst the precise impact of Cambridge Analytica remains debated, the scandal clearly brought to light the way in which personal data from social media platforms were being accessed and used without consent for political purposes by specialist data and election campaign firms.
We asked Adam Nocek about the impact this scandal might have had on public awareness of the use of data brokerage, the kinds of invasions of privacy we’ve seen from corporations, and the big business of political persuasion on social media platforms.
Julian Knowles
Adam, what do you make of the impact of the Cambridge Analytica scandal? Looking back at that moment, do you think there was any shift in public perceptions of social media platforms? Do you think that moment will just have appeared and gone without a trace? Or do you think that that moment will change the course of the social media platforms and the public's attitudes towards them?
Adam Nocek
My concern has to do with the fact that these are big visible cases that are obviously incredibly important and they impact democracy on a large scale, but one of the things that we're not thinking about, and I think we need to be thinking about, are the ways in which these practices are happening all of the time.
This isn't a special moment. Yeah, I mean, there is a kind of digital justice and digital activism, and I think this is really important, that there is a kind of moment of awareness around this. But my concern has to do with the fact that we're going to become, yeah, sleepy again, or not realize that it's not just one isolated event that we can control, but that similar practices have been happening all over the place. I mean, data brokerage firms exist. Data is moving between the corporate and state sectors, and it's being shared so that people can make profits on this kind of brokerage. I think this is not something that's going away, and I think the micro-practices are what's really dangerous.
So, I think the visibility is important. It's really good that we're bringing attention to this sort of stuff, of course, because more people are aware of it. My mother is now asking me questions about it, and she barely knows how to turn on her iPhone, and in terms of that sort of social awareness I think this is important. But, in terms of everyday practices, I don't think that people are paying attention.
Nicole Anderson
Adam, I want to come back to this question of large corporations taking responsibility. Can we simply assume that they will want to take responsibility?
There's a difference between them wanting to take responsibility and them having a form of non-negotiable external governance that requires them to take responsibility, because in some sense, if it's about the profit margin, it's not in their interests to actually take responsibility.
In that way, do you think that tech companies and their practices are corroding democracy from within?
Adam Nocek
Yeah, that's really important, because I think when it really comes down to it - and maybe this is part of the moment of thinking about, and thinking through, the consequences or effects of Cambridge Analytica - one of the important things that we're realizing now is that we're all training Google's algorithms. You know, the clickbait, for example. I mean, it's not simply clickbait, it's anything you click on. It's not simply about likes and dislikes. It's about training systems. We are doing that kind of training.
And I think, you know, this is exploited labor. This is labor for free. And this is happening on a global scale. Earlier I talked about new technological sovereignties; this is outside of the nation state. This is happening in globally distributed ways. And, you know, is there an interest in us not training these algorithms for free? I don't think so.
And this is the importance of Kate Crawford's work on the atlas of an AI system. This is the sort of thing she's talking about when she talks about these wider AI ecologies: we're not simply talking about data storage. We're also thinking about data gathering, and the workforces and the labor forces and the bodies, and the unpaid labor, that go into actually training these systems.
So, do corporations really want to become responsible for all of the global infrastructure, if you like, that goes into supporting the algorithm, and all the voices, the human and non-human voices, that are required to participate just, you know, for an Amazon Echo to get your order from a fast food joint or something like that? I doubt it.
And so, if you're talking about a radical participatory democracy, I think those are the kinds of questions we need to be asking.
Nicole Anderson
Moving on from corporate responsibility to online conflict, do you think that social media platforms are playing a role in increasing political and ideological divisions and elevating conflict, and if so how?
Adam Nocek
There are a lot of ways to answer this question. I mean, one has to do with big tech and the sort of concentration of power within Silicon Valley and the sort of social imaginary around Silicon Valley and the ways in which these kinds of technological giants have to outsource labor. So, in terms of conflict, yeah of course there's conflict and people have been writing on this in terms of environmental media practices, and so on. So, there are geopolitical conflicts that happen: thinking about mining practices and so forth. And I think those are big infrastructural questions.
I think on a more sort of everyday level, when we're talking about social media platforms, these platforms are designed for us to create profiles and to increase our social capital. And it's not simply about liking and not liking things. It's about presenting ourselves to the world in a kind of pornographic way. As Byung-Chul Han (Universität der Künste Berlin) would say, ‘this is a kind of pornographic society we live in’. And I think there's something right about that, and I also think there's something right in what Sherry Turkle (MIT) talks about when she says we're alone together.
And, you know, we are connecting through social media platforms for instance, but we're doing so in order to generate more forms of human and social capital. So, that itself is a kind of isolating practice and so I think connecting through these forms of increased social capital and becoming better and more productive entrepreneurs of the self, I think this is only going to create division.
This isn't the kind of social participatory action that a democratic society really requires. It creates isolation, individualism. And I think it's a step back and not a step forward.
Nicole Anderson (VO)
Adam Nocek proposes that this race to create human and social capital through social media profiles – what he refers to as an ‘entrepreneurship of the self’ - is inherently isolating, and so leads to conflict. But what about the role of AI in the dynamics and conflicts between nation states? Kate Crawford.
Kate Crawford
You realize how much AI is now being conflated with national military interest. So, that's absolutely happened in China. It's absolutely happened in the US, where if you hear conversations in DC, people are saying ‘oh, you can't regulate American technology companies, because then we simply would be allowing the Chinese to win, or the Russians to win’. And so, any type of controls would create this problematic sort of imbalance in what is currently the ‘AI race’. And I think it's extremely damaging rhetoric. It's created a race to the bottom, and it's certainly created this atmosphere where we do not have regulations on things that absolutely should be regulated, for just basic questions around: how do we make sure that systems are doing what they say they're doing?
So, we've seen some action in terms of the EU. The EU has created the first ever draft omnibus bill around AI - the ‘AI Act’ - and it's not perfect by any stretch, but it is at least attempting to create these hierarchies of concern. So, it has a series of AI applications which are either ‘red’, for being problematic; or ‘orange’, which could be problematic depending on how they're deployed; or ‘green’, which means you can do this, but be aware that there are potential downsides. It's a start, but it's by no means enough. But even then, it pertains to the EU, so you have to think about the way in which this can play out very unevenly across the geopolitics of AI writ large.
Julian Knowles
Kate, I wanted to ask you a question about the future now.
I love this part of your book because it's very realistic, as opposed to hopelessly optimistic. You say in the conclusion that it's (quote) “tempting to think that we might be able to democratize AI so that it might work in service of the people and not for capital against them. That we might re-orientate it towards justice and equality” (unquote).
But you suggest that the infrastructures and the forms of power that enable, and are enabled by, AI are skewed towards the centralization of control. And you say - I love this quote - “to suggest that we democratize AI to reduce asymmetries of power is a little like arguing for democratizing weapons manufacturing in the service of peace” (unquote). So, that’s an excellent comparison. It really brings the nature of the problem to the foreground, because it's inherently about power and things that can cause harm.
So, my question is – if AI is in service of the accumulation of power, and largely for governments and corporations, to what extent are you optimistic about the future of democracy?
Or do you think we're coming to a point of intractable problems, because AI and these technological meta-entities above nation states are now so powerful that we're not really in control anymore?
Kate Crawford
Well, I think if I wasn't optimistic, I wouldn't get out of bed in the morning to do the work that I do! And certainly, you know, the reason I research the relationships between AI and power is because I think it is of crucial importance that this is part of the public conversation around how we want democracies to develop.
And we actually have to start being far more critical and skeptical around the way in which we are devolving practices of decision-making and prediction to technical systems that are so prone to error and so prone to, again, accentuating those underlying asymmetries of wealth and power.
And there's many reasons why that happens, and, again, part of it is really the way that AI systems are designed, and the fact that we really have four companies in the world who run the computational backbone known as the cloud. So, even if we have lots of other companies that are doing things, they're backing onto one of those infrastructures. And you're looking at this really quite profound concentration of power, the likes of which we haven't seen since the emergence of the railways, or, if you like, Standard Oil. I mean, really, such a small number of companies have all of the levers of power, running core infrastructures that we need for everyday life.
So, in that sense I think we are facing a very real democratic challenge. And I would be foolish to say that I think it's going to be an easy one. I think it's a very real challenge but I do think that the only way that we start to address challenges is by first informing the public - actually making sure that this is a public conversation - that people feel empowered to look at how systems work, to push back and to create zones of refusal.
I mean, one of the things I write about in Atlas of AI is the idea of how we exercise this muscle that we are so ill-used to exercising: the muscle of refusing and saying ‘no, we don't want a facial recognition system in our school’. ‘No, we don't want this kind of tracking system being used in public housing’. ‘No, you know, we don't want to be using predictive policing in our neighborhood’.
And what we've seen over just the last couple of years has been quite extraordinary. And I think, certainly, a moment of optimism and hope is seeing how many cities have now passed laws to say ‘we won't allow facial recognition to be used on people here by the police’ or ‘we won't allow the use of predictive policing because it has such appalling racialized outcomes’. So, we're starting to see that muscle get exercised, but we have a lot more to do, I think.
The other place where I feel a lot of optimism is that, in some ways, because AI touches so many areas of life - you know, it touches how we work, how we go to school; it has enormous environmental impacts - we're starting to see activists who used to be really working in their own little silos, like just working on labor politics, or working on climate justice, or working on issues around how we create unions. These issues are now coalescing around AI.
So, you're starting to see a sort of unification of a lot of disparate political movements, and I think that's also something that will be very much needed here, because at its root, addressing the power imbalances of artificial intelligence in the world cannot be done as an individual project. You know, if you stop using Facebook, or if you don't use a smartphone, that's really not going to make a difference. You are still part of these systems and you're still exposed to these types of democratic impacts. So, this is a collective action problem, and that means ultimately that we have to be thinking about this collectively and trying to make these changes together. And, of course, that is ultimately always a democratic project.
Nicole Anderson (VO)
Kate Crawford, Adam Nocek, Peter Schlosser - thank you so much for such an interesting discussion and for your time today.
In the next episode we will be looking at the role of citizens in strengthening democracy.
WORKS CITED
Ananny, Mike & Crawford, Kate (2016) ‘Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability’, New Media & Society.
Bratton, Benjamin (2021), The Revenge of the Real: Politics for a Post-Pandemic World. Verso Books.
Campolo, Alex & Crawford, Kate (2020), 'Enchanted Determinism: Power without Responsibility in Artificial Intelligence', Engaging Science, Technology, and Society, 6, 1-19.
Crawford, Kate (2021), Atlas of AI. Yale University Press.
Crawford, Kate & Schultz, Jason (2019), 'AI Systems As State Actors', Columbia Law Review, 119(7), 1941-1972.
Han, Byung-Chul (2015), The Transparency Society. Stanford University Press.
Nocek, Adam (2021), Molecular Capture: The Animation of Biology. University of Minnesota Press.
Turkle, Sherry (2012), Alone Together: Why We Expect More from Technology and Less from Each Other. Basic Books.
Wylie, Christopher & Cadwalladr, Carole, Guardian interview: https://www.youtube.com/watch?v=FXdYSQ6nu-M
______
Transcript copyright Julian Knowles and Nicole Anderson (2022). All rights reserved. Free for educational use with attribution.