Stop talking about AI ethics. It’s time to talk about power.

In her 20-year career, Kate Crawford has contended with the real-world consequences of large-scale data systems, machine learning, and artificial intelligence. In 2017, together with Meredith Whittaker, she cofounded the AI Now Institute, the first research organization dedicated to studying the social implications of these technologies. She is now a professor at USC Annenberg in Los Angeles and a visiting professor of AI and justice at the École Normale Supérieure in Paris, as well as a senior researcher at Microsoft Research.

Five years ago, Crawford says, she was still having to make the case that data and AI are not neutral; the idea was far from mainstream. Now the conversation has evolved, and AI ethics has blossomed into a field of its own. She hopes her book will help it mature even further.

I sat down with Crawford to talk about her book.

This interview has been edited for length and clarity.

Why did you choose to do this book project and what does it mean to you?

Crawford: So many of the books that have been written about artificial intelligence focus on very narrow technical achievements. Sometimes they write about the great men of AI, but that is really all we have had in terms of grappling with what artificial intelligence actually is.

I believe this has produced a very distorted understanding of artificial intelligence: as purely technical systems that are somehow objective and neutral, and, as Stuart Russell and Peter Norvig put it in their textbook, as intelligent agents that make the best possible decision among all available actions.

I wanted to do something very different: to understand how artificial intelligence is made in the broadest sense. That means looking at the natural resources that drive it, the energy it consumes, the labor hidden all along the supply chain, and the vast amounts of data extracted from every platform and device we use each day.

In doing so, I wanted to open up an understanding of AI as neither artificial nor intelligent. It is the opposite of artificial: it comes from the most material parts of the earth’s crust and from the human bodies that labor over it, and from all the artifacts we produce, say, and photograph every day. Nor is it intelligent. I believe there is a great original sin in the field, because people assumed that computers are somehow like human brains, and that if we just train them like children, they will slowly grow into these supernatural beings.

What I find really problematic is that we have bought into this idea of intelligence when, in fact, we are looking at forms of statistical analysis at scale that have as many problems as the data they are given.

Was it immediately obvious to you that this is how people should think about AI? Or was it a journey?

It has absolutely been a journey. I would say one of the turning points for me came in 2016, when I started a project called “Anatomy of an AI System” with Vladan Joler. We met at a conference on voice-enabled AI, and we were trying, in effect, to draw what it takes to make an Amazon Echo work. What are the components? How does it extract data? What are the layers of the data pipeline?

We realized that to actually understand that, you have to understand where the components come from. Where were the chips made? Where are the mines? Where does the ore get smelted? Where are the logistics and supply chain paths?

Finally, how do you trace the end of life of these devices? How do we look at where the e-waste dumps end up, in places like Malaysia, Ghana, and Pakistan? What we ended up with was a two-year research project tracing these material supply chains from cradle to grave.

As you begin to study AI systems on that larger scale and over that longer time horizon, you shift away from these very narrow accounts of “AI fairness” and “ethics”: these are systems that produce profound and lasting geomorphic changes to our planet, and that deepen the forms of labor inequality we already have in the world.

That made me realize that I had to move from the analysis of a single device, the Amazon Echo, to applying this type of analysis to the entire industry. For me, that was the central task, and it is why Atlas of AI took five years to write. We need to see what these systems really cost us, because we so rarely do the work of understanding their true planetary implications.

Another real inspiration has been the growing number of scholars asking these bigger questions about labor, data, and inequality. Here I am thinking of Ruha Benjamin, Safiya Noble, Mar Hicks, Julie Cohen, Meredith Broussard, Simone Browne; the list goes on. I see this book as contributing to that body of knowledge by bringing in perspectives that connect the environment, labor rights, and data protection.

You travel a lot throughout the book. Almost every chapter begins with you observing your surroundings. Why was this important to you?

It was a very conscious choice to ground the analysis of AI in specific places, to move away from the abstract “nowheres” of algorithmic space in which so much of the discussion of machine learning takes place. And hopefully it highlights that when we don’t do that, when we speak only from these “nowhere” spaces of algorithmic objectivity, that too is a political choice, and it has ramifications.

As for threading the locations together, that is why I started thinking about the metaphor of an atlas, because atlases are unusual books. They are books you can open to take in the scale of an entire continent, or you can zoom in to look at a mountain range or a city. They give you shifts of perspective and shifts of scale.

There is a beautiful line I use in the book from the physicist Ursula Franklin. She writes about how maps join together the known and the unknown in these methods of collective insight. So for me, it was about drawing on the knowledge I had, but also on the actual locations where AI is being built out of rocks, sand, and oil.

What kind of reception has the book had?

One of the things that surprised me in the early responses is that people really feel this kind of perspective was overdue. There is a sense of recognition that we need a different kind of conversation from the ones we have been having in recent years.

We have spent far too much time focusing on narrow technical fixes for AI systems, always centering technical responses and technical answers. Now we need to contend with the environmental footprint of these systems. We need to contend with the very real forms of labor exploitation taking place in the construction of these systems.

And we have also begun to see the toxic legacy of what happens when you extract as much data as possible from the internet and simply call it ground truth. That problematic framing of the world has caused so much harm, and as always, the harm has been felt most by communities that were already marginalized and were not experiencing the benefits of these systems.

What do you hope people will start doing differently?

I hope it will become much harder to have these dead-end conversations in which terms like “ethics” and “AI for good” have been so completely denatured of any real meaning. I hope the book pulls the curtain aside so we can look at who is actually pulling the levers of these systems. That means shifting from a focus on things like ethical principles to talking about power.

How do we move away from this ethics framing?

If there has been a real trap in the technology sector over the last decade, it is that the theory of change has always centered on engineering. It has always been, “If there is a problem, there is a technological fix for it.” Only recently have we started to see that broaden: “Well, if there is a problem, then regulation can fix it. Policymakers have a role to play.”

But I think we need to widen that even further. We also need to ask: Where are the civil society groups, where are the activists, where are the advocates working on climate justice, labor rights, and data protection? How do we bring them into these discussions? How do we reach out to affected communities?

In other words, how do we have a far deeper democratic conversation about systems that already shape the lives of billions of people, largely outside of democratic rules and oversight?

In that sense, this book is trying to de-center technology and start asking bigger questions: What sort of world do we want to live in?

What sort of world do you want to live in? What kind of future do you dream of?

I want to see the groups working on hard issues like climate justice and labor rights draw together, and to see the realization that these previously quite separate fronts for social change and racial justice share concerns and share ground on which to coordinate and organize.

Because we are looking at a very short window of time here. We are dealing with a planet that is already under severe strain. We are looking at a profound concentration of power in very few hands. You would have to go back to the early days of the railroads to find another industry so concentrated, and now you could say that tech has overtaken even that.

So we need to fight for ways to pluralize our societies and build greater democratic accountability. And that is a collective-action problem. It is not a problem of individual choice. It is not about picking the more ethical tech brand off the shelf. It is about finding ways to work together on these planetary-scale challenges.
