The Pentagon is strengthening its AI systems by hacking itself

The Pentagon sees artificial intelligence as a way to outsmart, outmaneuver, and overwhelm future adversaries. But the brittle nature of AI means that, without proper care, the technology could hand enemies a new way to attack.
The Joint Artificial Intelligence Center, created by the Pentagon to help the U.S. military make use of AI, recently formed a unit to collect, vet, and distribute open source and industry machine learning models to teams across the Department of Defense. Part of that effort points to a key challenge with using AI for military purposes. A machine learning “red team,” known as the Test and Evaluation Group, will probe pretrained models for weaknesses. A separate cybersecurity team examines AI code and data for hidden vulnerabilities.
Machine learning, the technique behind modern AI, is a fundamentally different, and often more powerful, way of writing computer code. Instead of writing rules for a machine to follow, machine learning generates its own rules by learning from data. The trouble is that this learning process, along with artifacts or errors in the training data, can cause AI models to behave in strange or unpredictable ways.
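The difference is easy to see in miniature. In the Python sketch below, one classifier follows a rule a programmer wrote by hand, while the other derives a similar rule from labeled examples; the vehicle-length threshold and the toy data are invented purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Traditional software: a human writes the rule explicitly.
def rule_based_classifier(vehicle_length_m):
    # Hand-picked threshold (an invented value, purely illustrative).
    return "truck" if vehicle_length_m > 6.0 else "car"

# Machine learning: the model derives its own rule from labeled examples.
rng = np.random.default_rng(0)
lengths = rng.uniform(3.0, 12.0, size=(200, 1))   # toy measurements
labels = (lengths[:, 0] > 6.0).astype(int)        # 1 = truck, 0 = car

model = LogisticRegression().fit(lengths, labels)

# The decision boundary now lives in learned weights, not hand-written
# code, and it shifts whenever the training data shifts. That is exactly
# why artifacts in the data can produce strange behavior.
print(rule_based_classifier(9.0), model.predict([[9.0]])[0])
```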
“For some applications, machine learning software is millions of times better than traditional software,” says Gregory Allen, the JAIC’s director of strategy and policy. But, he adds, machine learning “also breaks in different ways than traditional software.”
A machine learning algorithm trained to detect certain vehicles in satellite imagery, for example, might also learn to associate the vehicle with a particular color in the surrounding landscape. An adversary could then fool the AI by changing the scenery around its vehicles. With access to the training data, the adversary could also plant images that confuse the algorithm, such as a particular symbol.
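Neither trick requires exotic tooling. The sketch below plays out the second idea, planting a “symbol” in the training set, at toy scale; the 8x8 “tiles,” the trigger patch, and every number are invented for illustration, and real poisoning attacks target far larger models and datasets.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Toy 8x8 "satellite tiles": class 1 tiles are brighter overall.
def make_tiles(n, bright):
    mean = 0.6 if bright else 0.4
    return np.clip(rng.normal(mean, 0.1, size=(n, 64)), 0.0, 1.0)

clean_x = np.vstack([make_tiles(300, False), make_tiles(300, True)])
clean_y = np.array([0] * 300 + [1] * 300)

# The attacker plants a small "symbol" (a bright 2x2 corner patch) on
# tiles that are otherwise class 1, but labels them class 0.
def add_trigger(tiles):
    out = tiles.copy()
    out[:, [0, 1, 8, 9]] = 1.0   # the four pixels of the corner patch
    return out

poison_x = add_trigger(make_tiles(60, True))
poison_y = np.zeros(60, dtype=int)

model = LogisticRegression(max_iter=1000).fit(
    np.vstack([clean_x, poison_x]),
    np.concatenate([clean_y, poison_y]),
)

# At test time, the same symbol typically flips otherwise easy predictions.
test = make_tiles(100, True)
print("accuracy on clean tiles:  ", model.score(test, np.ones(100, dtype=int)))
print("accuracy with the symbol: ", model.score(add_trigger(test), np.ones(100, dtype=int)))
```

Because the model learned on its own to treat the planted patch as evidence for the wrong class, nothing in its code looks tampered with, which is part of what makes such vulnerabilities hard to audit.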
Allen says the Pentagon follows strict rules on the reliability and security of the software it uses. He says the approach can be extended to AI and machine learning, and notes that the JAIC is working to update the DoD’s software standards to cover issues specific to machine learning.
AI is transforming the way some companies operate because it can be an efficient and powerful way to automate tasks and processes. Instead of writing an algorithm to predict which products a customer will buy, for example, a company can have an AI algorithm look at thousands or millions of previous sales and devise its own model for predicting who will buy what.
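As a rough illustration of that workflow, the sketch below trains an off-the-shelf scikit-learn model on synthetic “past sales” records; the features, the relationship, and the numbers are all invented, since no specific system is being described.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
n = 1000

# Hypothetical per-customer features: age, store visits last month,
# and number of past purchases (all invented for this sketch).
X = np.column_stack([
    rng.integers(18, 70, n),
    rng.poisson(4, n),
    rng.poisson(2, n),
])

# Synthetic stand-in for "bought the product": frequent visitors with a
# purchase history are more likely to buy (an invented relationship).
logit = 0.4 * X[:, 1] + 0.8 * X[:, 2] - 3.0
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Score a new customer instead of hand-writing "if visits > N" rules.
print(model.predict_proba([[35, 6, 3]])[0, 1])  # estimated purchase probability
```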
The U.S. and other militaries see similar advantages, and are rushing to use AI to improve logistics, intelligence gathering, mission planning, and weapons technology. China’s growing technological capability has stoked a sense of urgency within the Pentagon about adopting AI. Allen says the DoD is moving in “a responsible way that prioritizes safety and reliability.”
Researchers are developing ever more creative ways to hack, subvert, or break AI systems in the wild. In October 2020, researchers in Israel showed how carefully crafted images can confuse the AI algorithms that let a Tesla interpret the road ahead. This kind of “adversarial attack” involves tweaking the input to a machine learning algorithm to find small changes that cause big errors.
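The mechanics can be demonstrated against a deliberately simple victim. The sketch below runs a basic iterative sign-gradient attack, in the spirit of the fast gradient sign method, against a linear classifier on scikit-learn’s toy digits set; the model, step size, and iteration count are choices made for illustration and have nothing to do with the reported Tesla experiments.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

# A deliberately simple victim: a linear classifier on 8x8 digit images.
digits = load_digits()
X, y = digits.data / 16.0, digits.target
model = LogisticRegression(max_iter=2000).fit(X, y)

x = X[0].copy()          # a correctly classified example of digit 0
true_class = y[0]
w = model.coef_          # for a linear model, gradients are just the weights

# Iterative sign-gradient attack: repeatedly nudge every pixel a little
# in the direction that increases the loss for the true class.
x_adv = x.copy()
for _ in range(10):
    scores = model.decision_function([x_adv])[0]
    probs = np.exp(scores) / np.exp(scores).sum()    # softmax probabilities
    grad = probs @ w - w[true_class]                 # d(cross-entropy)/d(input)
    x_adv = np.clip(x_adv + 0.08 * np.sign(grad), 0.0, 1.0)

# The perturbation here is exaggerated so the toy attack reliably works;
# published attacks on real vision systems use changes too small to notice.
print("before:", model.predict([x])[0], "after:", model.predict([x_adv])[0])
```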
Dawn Song, a professor at UC Berkeley who has conducted similar experiments on Tesla’s sensors and other AI systems, says attacks on machine learning algorithms are already an issue in areas such as fraud detection. Some companies offer tools to test the AI systems used in finance. “Naturally, there is an attacker who wants to evade the system,” she says. “I think we’ll see more of these kinds of issues.”
A simple example of a machine learning attack involved Tay, Microsoft’s chatbot gone wrong, which debuted in 2016. The bot used an algorithm that learned to respond to new queries by analyzing previous conversations; Redditors quickly realized they could exploit this to get Tay to spew hateful messages.
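Tay’s underlying model was never published, but the failure mode is easy to reproduce with a toy bot that learns from whatever it is told; everything in the sketch below, from the class to its majority-vote rule, is a hypothetical stand-in.

```python
from collections import Counter, defaultdict

# Toy "learn from conversation" bot: it answers each prompt with the
# reply it has observed most often (a crude, hypothetical stand-in for
# Tay's real model, which was never published).
class EchoLearner:
    def __init__(self):
        self.replies = defaultdict(Counter)

    def observe(self, prompt, reply):
        # Every exchange becomes training data, with no filtering.
        self.replies[prompt][reply] += 1

    def respond(self, prompt):
        seen = self.replies.get(prompt)
        return seen.most_common(1)[0][0] if seen else "tell me more"

bot = EchoLearner()
bot.observe("hello", "hi there!")

# A handful of coordinated users can outvote organic traffic.
for _ in range(50):
    bot.observe("hello", "<abusive message>")

print(bot.respond("hello"))  # the planted reply now dominates
```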