This agency wants to know exactly how much you trust AI

Assistant Professor Himabindu Lakkaraju of Harvard University studies the role trust plays in human decision-making in professional fields. She is working with nearly 200 doctors at hospitals in Massachusetts to understand how trust in AI can change the way doctors diagnose a patient.

For common illnesses like the flu, AI is not much help, since human professionals can recognize them fairly easily on their own. But Lakkaraju found that AI can help doctors diagnose diseases that are difficult to identify, such as autoimmune diseases. In her latest work, Lakkaraju and colleagues gave doctors the records of roughly 2,000 patients together with the predictions of an AI system, then asked them to predict whether each patient would have a stroke within six months. They varied the information provided about the AI system, including its accuracy, its confidence interval, and an explanation of how it works. They found that the doctors' predictions were most accurate when they were given the most information about the AI system.
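
To make the study design concrete, here is a minimal, purely illustrative Python sketch of the kind of comparison such an experiment implies: clinicians' prediction accuracy across disclosure conditions. The condition names, baseline accuracy, and per-condition effect sizes are all hypothetical, invented for illustration; only the direction of the effect (more information about the AI, better accuracy) comes from the article.

```python
# Toy simulation in the spirit of the study described above. All numbers
# and condition names are hypothetical, not the study's actual protocol.
import random

random.seed(0)

CONDITIONS = [
    "ai_prediction_only",
    "plus_accuracy",
    "plus_confidence_interval",
    "plus_explanation_of_system",
]

def simulate_doctor_accuracy(condition: str, n_patients: int = 2000) -> float:
    """Simulate the fraction of correct stroke predictions under one condition.

    Assumes (hypothetically) that each added disclosure level improves the
    doctors' calibration, and hence accuracy, by a small fixed amount.
    """
    base = 0.70                                  # hypothetical baseline accuracy
    boost = 0.03 * CONDITIONS.index(condition)   # assumed gain per disclosure level
    correct = sum(random.random() < base + boost for _ in range(n_patients))
    return correct / n_patients

for cond in CONDITIONS:
    print(f"{cond:>28}: {simulate_doctor_accuracy(cond):.3f}")
```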

Lakkaraju says she is pleased to see that NIST is trying to quantify trust, but she argues the agency should also consider the role that explanations can play in human trust of AI systems. In her experiment, the accuracy of the doctors' stroke predictions dropped when they were given an explanation without data to inform the decision, implying that explanations alone can lead people to trust AI too much.

“Explanations can trigger unusually high trust even when it isn't warranted, and that's a recipe for problems,” she says. “But once you start putting numbers on how good the explanation is, people's trust gets slowly calibrated.”

Other nations are also trying to address the question of trust in AI. The US is among more than 40 countries that have signed on to AI principles that emphasize trustworthiness. A document signed by a dozen of those European countries says trustworthiness and innovation go hand in hand, and can be considered “two sides of the same coin.”

NIST and the OECD, a group of 38 countries with advanced economies, are working on tools to designate AI systems as high or low risk. The Canadian government created an Algorithmic Impact Assessment process in 2019 for businesses and government agencies. It sorts AI into four categories, ranging from no impact on people's lives or the rights of communities to very high risk, with harm to individuals and communities. Rating an algorithm takes about 30 minutes. The Canadian approach requires developers to notify users for all but the lowest-risk systems.
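
As a rough illustration of how a four-level scheme like Canada's Algorithmic Impact Assessment could be modeled, here is a hedged Python sketch. The real assessment is a detailed questionnaire; the score thresholds and function names below are assumptions, and only the four impact levels and the notify-all-but-lowest-risk rule come from the description above.

```python
# Hedged sketch of a four-level impact classification like the one described
# above. Thresholds and names are hypothetical simplifications, not the real
# Algorithmic Impact Assessment's scoring rules.
from enum import IntEnum

class ImpactLevel(IntEnum):
    LEVEL_I = 1    # little to no impact on individuals or communities
    LEVEL_II = 2   # moderate impact
    LEVEL_III = 3  # high impact
    LEVEL_IV = 4   # very high risk of harm to individuals and communities

def classify(raw_score: float) -> ImpactLevel:
    """Map a questionnaire score in [0, 1] to an impact level (hypothetical cutoffs)."""
    if raw_score < 0.25:
        return ImpactLevel.LEVEL_I
    if raw_score < 0.50:
        return ImpactLevel.LEVEL_II
    if raw_score < 0.75:
        return ImpactLevel.LEVEL_III
    return ImpactLevel.LEVEL_IV

def must_notify_users(level: ImpactLevel) -> bool:
    """Per the article, all but the lowest-risk systems require notifying users."""
    return level > ImpactLevel.LEVEL_I

level = classify(0.6)
print(level.name, "notify users:", must_notify_users(level))
```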

EU lawmakers are considering AI regulations that could help define global standards for which kinds of AI count as low or high risk and how to regulate the technology. Like Europe's landmark GDPR privacy law, the EU's AI strategy could lead the world's largest companies that use artificial intelligence to change their practices worldwide.

The draft regulation requires the creation of a public register of high-risk forms of AI in use, kept in a database managed by the European Commission. Examples of high-risk AI named in the document include AI used in education and employment, and AI used as a safety component of services such as electricity, gas, or water. The draft is likely to change before passage, but it calls for a ban on governments using AI to assign citizens a social score, and on real-time facial recognition.

The EU report also encourages companies and researchers to experiment in areas called “sandboxes,” designed to ensure the legal framework is “innovation-friendly, future-proof, and resilient to disruption.” Earlier this month, the Biden administration introduced the National Artificial Intelligence Research Resource Task Force, which aims to share government data for research on issues such as health care or self-driving. Final plans would require Congressional approval.

For now, NIST's trust score is aimed at AI professionals. Over time, however, such scores could enable individuals to avoid untrustworthy AI and push the market toward robust, tested, and reliable systems, provided, of course, that people know when AI is being used at all.

