This Researcher Says AI Is Neither Artificial nor Intelligent

Technology companies like to portray artificial intelligence as a precise and powerful tool for good. Kate Crawford says that mythology is flawed. In her book Atlas of AI, she visits a lithium mine, an Amazon warehouse, and a 19th-century archive of phrenological skulls to illustrate the natural resources, human labor, and bad science underpinning some versions of the technology. Crawford, a professor and researcher at the University of Southern California and Microsoft, says many applications and side effects of AI are in urgent need of regulation.

Crawford recently discussed these issues with WIRED senior writer Tom Simonite. An edited transcript follows.

WIRED: Few people understand all the technical details of artificial intelligence. You argue that some experts working on the technology misunderstand AI more deeply.

KATE CRAWFORD: It's presented as an ethereal and objective way of making decisions, something we can plug into everything from teaching kids to deciding who gets bail. But the name is misleading: AI is neither artificial nor intelligent.

AI is made from vast amounts of natural resources, fuel, and human labor. And it's not intelligent in the way human minds are intelligent. It's not able to discern things without extensive human training, and it has a completely different statistical logic for how meaning is made. Since the very beginning of AI, in 1956, we've made this terrible error, a sort of original sin of the field: believing that minds are like computers and vice versa. We assume these things are an analog of the human mind, and nothing could be further from the truth.

You take on that myth by showing how AI is constructed. Like many industrial processes, it turns out to be messy. Some machine learning systems are built with hastily collected data, which can cause problems like face recognition services that are more error-prone for minorities.

We need to look at the nose-to-tail production of artificial intelligence. The seeds of the data problem were planted in the 1980s, when it became common to use data sets without close knowledge of what was inside, or concern for privacy. Data was just "raw" material to be reused across thousands of projects.

This evolved into an ideology of mass data extraction, but data is not an inert substance: it always carries a context and a politics. Sentences from Reddit will be different from those in children's books. Images from mugshot databases have different histories than images from the Oscars, but they are all used alike. This causes a host of problems downstream. In 2021, there are still no industry-wide standards for noting what kinds of data are in training sets, how they were acquired, or potential ethical issues.

You trace the roots of emotion recognition software to dubious science funded by the Department of Defense in the 1960s. A recent review of more than 1,000 research papers found no evidence that a person's emotions can be reliably inferred from their face.

Emotion detection represents the fantasy that technology will finally answer questions we have about human nature that are not technical questions at all. This idea, so contested in the field of psychology, made the jump into machine learning because it is a simple theory that fits the tools. Recording people's faces and correlating that to simple, predefined emotional states works with machine learning, if you drop culture and context and the fact that you can change the way you look and feel hundreds of times a day.

It also becomes a feedback loop: because we have emotion detection tools, people say we want to apply them in schools and courtrooms and to catch potential shoplifters. Recently, companies have been using the pandemic as a pretext to use emotion recognition on schoolchildren. This takes us back to the phrenological past, the belief that you can detect character and personality from the face and the shape of the skull.

Photograph: Cath Muscat

You have contributed to the recent growth of research into how AI can have undesirable effects. But that field is entangled with people and funding from the technology industry, which seeks to profit from AI. Google recently forced out two respected AI ethics researchers, Timnit Gebru and Margaret Mitchell. Does industry involvement limit research that questions AI?
