
The Pentagon inches toward letting AI control weapons


Last August, several dozen military drones and tank-like robots took to the skies and roads 40 miles south of Seattle. Their mission: find terrorists suspected of hiding among several buildings.

So many robots took part in the operation that no single human operator could keep a close watch on all of them. So they were given instructions to find, and when necessary eliminate, enemy combatants.

The mission was just an exercise, organized by the Defense Advanced Research Projects Agency, the Pentagon's blue-sky research division; the robots were armed with nothing more lethal than radio transmitters designed to simulate interactions with friendly and enemy robots.

The simulation was one of several conducted last summer to test how artificial intelligence could help expand the use of automation in military systems, including in scenarios too complex and fast-moving for humans to make every critical decision. The demonstrations reflect a subtle shift in the Pentagon's thinking about autonomous weapons, as it becomes clearer that machines can outperform humans at parsing complex situations or operating at high speed.

General John Murray of the US Army Futures Command told an audience at the US Military Academy last month that swarms of robots will force military planners, policymakers, and society to consider whether a person should make every decision about using lethal force in new autonomous systems. Murray asked, "Is it within a human's ability to pick out which ones have to be engaged" and then make 100 individual decisions? "Is it even necessary to have a human in the loop?" he added.

Other comments from military commanders suggest an interest in giving autonomous weapons systems more agency. At a conference on AI in the Air Force last week, Michael Kanaan, director of operations for the Air Force Artificial Intelligence Accelerator at MIT and a leading voice on AI within the US military, said thinking on the subject is evolving. He says AI should handle more of the work of identifying and distinguishing potential targets while humans make the high-level decisions. "I think that's where we're going," Kanaan says.

At the same event, Lieutenant General Clinton Hinote, the Pentagon's deputy chief of staff for strategy, integration, and requirements, says that whether a person can be removed from the loop of a lethal autonomous system is "one of the most interesting debates that is coming, [and] has not been settled yet."

A report this month from the National Security Commission on Artificial Intelligence (NSCAI), an advisory group created by Congress, recommended, among other things, that the US resist calls for an international ban on the development of autonomous weapons.

Timothy Chung, the Darpa program manager in charge of the swarming project, says last summer's exercises were designed to explore when a human drone operator should, and should not, make decisions for the autonomous systems. For example, when faced with attacks on several fronts, human control can sometimes get in the way of a mission, because people cannot react quickly enough. "Actually, the systems can do better from not having someone intervene," Chung says.

The drones and wheeled robots, each about the size of a large backpack, were given an overall objective, then tapped AI algorithms to devise a plan to achieve it. Some of them surrounded buildings while others carried out surveillance sweeps. A few were destroyed by simulated explosives; some identified beacons representing enemy combatants and chose to attack.
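As a rough illustration of what "given an overall objective, then tapped AI algorithms to devise a plan" can look like, here is a minimal, hypothetical Python sketch in which the objective is broken into tasks and each robot greedily takes the nearest one. Darpa has not published the planning software used in the exercise, so the Robot and Task classes and the assign_tasks helper below are assumptions for illustration only.

```python
# A toy sketch of swarm task allocation: an overall objective is decomposed
# into tasks (surround a building, run a sweep, investigate a beacon) and
# each task is greedily matched to the nearest free robot. Illustrative only;
# it does not reflect the actual planning code used in the Darpa exercise.
import math
from dataclasses import dataclass

@dataclass
class Robot:
    name: str
    x: float
    y: float

@dataclass
class Task:
    kind: str   # e.g. "surround", "sweep", "investigate_beacon"
    x: float
    y: float

def assign_tasks(robots, tasks):
    """Pair each task with the nearest robot that has not yet been assigned."""
    assignments = {}
    free = list(robots)
    for task in tasks:
        if not free:
            break
        nearest = min(free, key=lambda r: math.hypot(r.x - task.x, r.y - task.y))
        free.remove(nearest)
        assignments[nearest.name] = task
    return assignments

if __name__ == "__main__":
    swarm = [Robot("drone-1", 0, 0), Robot("drone-2", 50, 10), Robot("ugv-1", 20, 40)]
    objective = [Task("surround", 30, 35), Task("sweep", 5, 5), Task("investigate_beacon", 55, 12)]
    for robot, task in assign_tasks(swarm, objective).items():
        print(f"{robot} -> {task.kind} at ({task.x}, {task.y})")
```

Real swarm planners handle far more (communication limits, replanning, conflicting goals), but the basic pattern of decomposing one objective into per-robot tasks is the same.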

The US and other nations have used autonomy in weapons systems for decades. Some missiles, for example, can autonomously identify and attack enemies within a given area. But rapid advances in AI algorithms will change how the military uses such systems. Off-the-shelf AI code capable of controlling robots and identifying landmarks and targets, often with high reliability, will make it possible to deploy more systems in a wider range of situations.

But as the drone demonstrations highlight, more widespread use of AI will sometimes make it harder to keep a human in the loop. That could prove problematic, because AI technology can harbor biases or behave unpredictably. A vision algorithm trained to recognize a particular uniform might mistakenly target someone wearing similar clothing. Chung says the swarm project presumes that AI algorithms will improve to the point where they can identify enemies reliably enough to be trusted.
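To make the human-in-the-loop trade-off concrete, here is a small, hypothetical sketch in which a classifier's output is acted on autonomously only when its confidence clears a threshold, and deferred to a human operator otherwise. The fake_classifier function, the labels, and the 0.9 threshold are invented for illustration and do not reflect the software used in the exercise.

```python
# Hypothetical sketch of confidence-gated autonomy: act only on
# high-confidence classifications, defer everything else to a person.
def fake_classifier(observation):
    """Stand-in for a vision model; returns (label, confidence)."""
    known = {"enemy_uniform": ("enemy", 0.97), "similar_clothing": ("enemy", 0.62)}
    return known.get(observation, ("unknown", 0.10))

def decide(observation, threshold=0.9):
    label, confidence = fake_classifier(observation)
    if label == "enemy" and confidence >= threshold:
        return "treat as target autonomously"   # high confidence: no human involved
    return "defer to human operator"            # ambiguous cases stay with a person

for obs in ["enemy_uniform", "similar_clothing", "civilian"]:
    print(obs, "->", decide(obs))
```

The "similar_clothing" case shows the failure mode the article describes: the model still says "enemy," just with lower confidence, and the only safeguard is how, and whether, a human reviews the uncertain calls.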
