
RE:WIRED 2021: Timnit Gebru says artificial intelligence needs to slow down


Artificial intelligence researchers are facing a problem of accountability: how do you ensure that decisions are responsible when the decision-maker is not a person but an algorithm? Right now, only a handful of people and organizations have the power, and the resources, to automate decision-making.

Organizations rely on AI to approve a loan or shape a defendant’s sentence. But the foundations on which these intelligent systems are built are susceptible to bias. Bias from the data, from the programmers, and from a powerful company’s bottom line can snowball into unintended consequences. This is the reality AI researcher Timnit Gebru warned of at a RE:WIRED talk on Tuesday.

“There were companies claiming [to assess] someone’s likelihood of committing a crime again,” Gebru said. “That was terrifying for me.”

Gebru was a star engineer at Google who specialized in AI ethics. She co-led a team charged with guarding against algorithmic racism, sexism, and other biases. Gebru also founded the nonprofit Black in AI, which seeks to improve the inclusion, visibility, and well-being of Black people in her field.

Last year, Google forced her out. But she has not given up the fight to prevent unintended harm from machine learning algorithms.

On Tuesday, Gebru spoke with WIRED senior writer Tom Simonite about the incentives in AI research, the role of worker protections, and her vision for a planned independent institute for AI ethics and accountability. Her central point: AI needs to slow down.

“We haven’t had the time to think about how it should even be built, because we’re always just putting out fires,” she said.

Attending public school in the Boston suburbs as an Ethiopian refugee, Gebru quickly became acquainted with America’s racial dissonance. Classroom discussions referred to racism in the past tense, but that didn’t square with what she saw, Gebru told Simonite earlier this year. She has found similar misalignments repeatedly throughout her career in tech.

Gebru’s professional career began in hardware. But she changed course when she saw the barriers to diversity, and began to suspect that most AI research had the potential to harm groups that were already marginalized.

“The confluence of that got me going in a different direction, which is to try to understand and limit the negative societal impacts of AI,” she said.

For two years, Gebru co-led Google’s Ethical AI team alongside computer scientist Margaret Mitchell. The team built tools to protect Google’s product teams from AI mishaps. Over time, however, Gebru and Mitchell realized they were being left out of meetings and email threads.

In June 2020, the GPT-3 language model was released, at times demonstrating an ability to produce coherent prose. But Gebru’s team worried about the excitement surrounding it.

“Let’s build bigger and bigger and bigger language models,” Gebru said, recalling the popular sentiment. “We had to be like, ‘Please let’s just stop and calm down for a second so that we can think about the pros and cons, and maybe alternative ways of doing this.’”

Her team helped write a paper on the ethical implications of language models, “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”

Others at Google were not happy. Gebru was asked to retract the paper or remove the names of Google employees from it. She countered with a request for transparency: Who had demanded such drastic action, and why? Neither side budged. Gebru learned from one of her direct reports that she had “resigned.”
