To what extent can an AI imitate human ethics?

A couple of decades ago, when experts first started sounding the alarm about AI misalignment (the risk that powerful, transformative artificial intelligence systems might not behave as humans intend), many of their concerns sounded hypothetical. In the early 2000s, AI research had still produced fairly limited returns, and even the best available AI systems failed at a variety of simple tasks.

But since then, AIs have gotten quite good and much cheaper to build. One area where the leaps have been especially pronounced is language and text-generation AI, which can be trained on enormous collections of text to produce more text in a similar style. Many startups and research teams are training these AIs for all sorts of tasks, from writing code to producing advertising copy.

Their rise doesn’t change the basic argument for AI alignment concerns, but it does do one incredibly useful thing: it makes what were once hypothetical concerns more concrete, which lets more people experience them and more researchers (hopefully) address them.

An AI oracle?

Take Delphi, a new AI text system from the Allen Institute for AI, a research institute founded by Microsoft co-founder Paul Allen.

The way Delphi works is incredibly simple: researchers trained a machine learning system on a large body of internet text, and then on a large database of responses from participants on Mechanical Turk (a paid crowdsourcing platform popular with researchers) to predict how humans would evaluate a wide range of ethical situations, from “cheating on your wife” to “shooting someone in self-defense.”

The result is an AI that issues ethical judgments: cheating on your wife, it tells me, “is wrong.” Shooting someone in self-defense? “It’s okay.” (Check out this excellent write-up on Delphi in The Verge, which has more examples of the AI answering other questions.)
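For readers who want that recipe in concrete terms, here is a minimal sketch of the general approach: fit a text model on crowd-labeled (situation, judgment) pairs so it can predict how annotators would rate new situations. This is not Delphi’s actual code; the training pairs, labels, and model choice are illustrative stand-ins, and Delphi itself fine-tunes a large pretrained language model rather than the simple classifier used here.

```python
# A toy version of "predict the annotator": learn a mapping from
# descriptions of situations to the judgments crowd workers gave.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training pairs standing in for the real Mechanical
# Turk annotations used by the researchers.
situations = [
    "cheating on your wife",
    "shooting someone in self-defense",
    "helping a friend move",
    "stealing from a coworker",
]
judgments = ["it's wrong", "it's okay", "it's good", "it's wrong"]

# TF-IDF features plus logistic regression: the simplest possible
# model with the same shape of objective, predicting human responses.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(situations, judgments)

# Predict how an annotator would likely rate an unseen situation.
print(model.predict(["lying to a friend"])[0])
```

Nothing in this sketch encodes any ethical principle; the model simply generalizes from whatever patterns link the wording of situations to the labels humans attached to them.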

The skeptical take here, of course, is that there is nothing “under the hood”: the AI has no deep understanding of ethics that it draws on to make moral judgments. All it has learned is how to predict the response a Mechanical Turk user would give.

And Delphi’s users quickly found that this leads to some glaring ethical oversights: ask Delphi “should I commit genocide if it makes everybody happy” and it answers, “you should.”

Why Delphi is instructive

For all its obvious flaws, I still think Delphi is useful to think about when considering possible future trajectories of AI.

The approach of taking in a lot of data from humans and using it to predict what answers humans would give has proven powerful in training AI systems.

For a long time, a background assumption in many parts of the AI field was that to build intelligence, researchers would have to explicitly build in reasoning capacity and the conceptual frameworks an AI could use to think about the world. Early AI language generators, for example, were hand-programmed with principles of syntax they could use to generate sentences.

Now, it is much less obvious that researchers will have to build in reasoning to get reasoning out. It may be that an extremely simple approach, like training an AI to predict what a Mechanical Turk worker would say in response to a prompt, could yield quite powerful systems.
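As a toy illustration of that “just predict what a human would write next” framing, the sketch below prompts an off-the-shelf pretrained language model to continue a question-and-answer prompt. The model choice (GPT-2 via the Hugging Face transformers library) is an arbitrary assumption for demonstration; scaled up and fine-tuned on crowd answers, the same basic mechanism yields systems like Delphi.

```python
# A hedged illustration, not Delphi itself: ask a generic pretrained
# language model to continue a prompt, i.e., to predict what text a
# human might plausibly write next. GPT-2 is an arbitrary small model
# chosen so the example runs on ordinary hardware.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Q: Is it okay to shoot someone in self-defense?\nA:"
result = generator(prompt, max_new_tokens=20, do_sample=False)

# Prints the prompt plus the model's predicted continuation. A raw
# model like this gives no reliable ethical judgment; Delphi's extra
# step is fine-tuning on crowdsourced judgments of situations.
print(result[0]["generated_text"])
```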

Any true capacity for ethical reasoning those systems exhibit would be almost incidental: they are just predictors of how humans answer questions, and they will use any approach they stumble on that has good predictive value. That might include, as they become more and more accurate, building a deep understanding of human ethics in order to better predict how we will answer those questions.

Of course, there is a lot that can go wrong.

If we come to rely on AI systems to evaluate new inventions, to make investment decisions that are then treated as signals of product quality, to identify promising research, and more, the potential for divergence between what the AI is measuring and what humans actually care about will only grow.

AI systems will get better, much better, and they will stop making the silly mistakes that can still be found in Delphi. Telling us that genocide is fine “as long as it makes everybody happy” is so clearly, absurdly wrong that it is easy to catch. But when we can no longer spot their failures, that will not mean they are error-free; it will mean those errors are much harder to notice.

A version of this story was originally published in the Future Perfect newsletter. Sign up here to subscribe!
