The Department of Defense is providing AI ethics guidelines for technology contractors

The purpose of the guidelines is to ensure that technology contractors adhere to the DoD’s existing ethical principles for AI, says Goodman. The DoD announced those principles last year, following a study commissioned by the Defense Innovation Board, an advisory panel set up in 2016 to bring the spark of Silicon Valley to the U.S. military. The board was chaired by former Google CEO Eric Schmidt until September 2020, and its current members include Daniela Rus, director of MIT’s Computer Science and Artificial Intelligence Laboratory.

However, some critics question whether the work promises meaningful reform.

During the study, the board consulted a range of experts, including vocal critics of the military’s use of AI, such as members of the Campaign to Stop Killer Robots and Meredith Whittaker, a former Google researcher who helped organize employee protests over Project Maven.

Whittaker, who is now at New York University’s AI Now Institute, was not available for comment. But according to institute spokesperson Courtney Holsworth, she attended one meeting, where she argued with senior members of the board, including Schmidt, about the direction it was taking. “She was never meaningfully consulted,” Holsworth says. “Claiming that she was could be read as a form of ethics-washing, in which the presence of dissenting voices during a small part of a long process is used to claim broad buy-in from relevant stakeholders.”

If the guidelines lack that kind of buy-in, can they still help build trust? “There will be people who will never be satisfied with any set of ethics guidelines the DoD produces, because they find the idea paradoxical,” Goodman says. “It’s important to be realistic about what the guidelines can and can’t do.”

For example, the guidelines say nothing about the use of lethal autonomous weapons, a technology that some campaigners believe should be banned. But Goodman notes that the regulations governing such technology are decided higher up the chain of command. The aim of the guidelines is to make it easier to build AI that complies with those regulations, and part of that process is to make explicit any concerns held by third-party developers. “A valid outcome of applying these guidelines is deciding not to pursue a particular system,” says Jared Dunnmon of the DIU, who co-authored them. “You may decide it’s not a good idea.”
