AI wrote better phishing emails than humans in a recent test


Natural language processing continues to find its way into unexpected corners. This time, it's phishing emails. In a small study, researchers found that they could use the GPT-3 deep learning language model, along with other AI-as-a-service platforms, to significantly lower the barrier to entry for crafting spearphishing campaigns at massive scale.

Researchers have long debated whether it would be worth the effort for scammers to train machine learning algorithms that could generate compelling phishing messages. Mass phishing messages are simple and formulaic, after all, and are already highly effective. Highly targeted and tailored "spearphishing" messages take more work to compose, though. That's where NLP comes in surprisingly handy.

This week at the Black Hat and Defcon security conferences in Las Vegas, a team from Singapore's Government Technology Agency presented a recent experiment in which they sent targeted phishing emails they had crafted themselves, along with others generated by an AI-as-a-service platform, to 200 of their colleagues. Both sets of messages contained links that were not actually malicious but simply reported click-through rates back to the researchers. They were surprised to find that more people clicked the links in the AI-generated messages than in the human-written ones, by a significant margin.

"Researchers have pointed out that AI requires some level of expertise. It takes millions of dollars to train a really good model," says Eugene Lim, a cybersecurity specialist at the Government Technology Agency. "But once you put it on AI-as-a-service, it costs a couple of cents and it's really easy to use: just text in, text out. You don't even have to run code, you just give it a prompt and it will give you output. That lowers the barrier of entry to a much bigger audience and increases the potential targets for spearphishing. Suddenly, every single email on a mass scale can be personalized for each recipient."
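
Lim's "text in, text out" point is easy to see in practice. Below is a minimal, deliberately benign sketch of that workflow, assuming the GPT-3-era openai Python SDK and its Completion endpoint; the prompt and model choice here are illustrative, not the researchers' actual setup.

```python
# A minimal "text in, text out" sketch, assuming the GPT-3-era openai
# Python SDK (pip install openai). The prompt is deliberately benign;
# the point is only that no model training or custom code is required.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # access is gated by an API key

response = openai.Completion.create(
    engine="davinci",  # a GPT-3 base model offered at the time
    prompt="Write a short, friendly reminder about Thursday's team lunch.",
    max_tokens=120,
    temperature=0.7,
)

print(response.choices[0].text.strip())  # text out
```

The whole interaction is a single API call, which is exactly why Lim argues the barrier to entry collapses.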

The researchers used OpenAI's GPT-3 platform in conjunction with other AI-as-a-service products focused on personality analysis to generate phishing emails tailored to their colleagues' backgrounds and traits. Machine learning focused on personality analysis aims to predict a person's proclivities and mindset based on behavioral inputs. By running the outputs through multiple services, the researchers were able to develop a pipeline that groomed and refined the emails before sending them out. They say the results sounded "weirdly human," and that the platforms automatically supplied surprising specifics, such as mentioning a Singaporean law when instructed to generate content for people living in Singapore.
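
The researchers have not published their pipeline, but the pattern they describe, feeding one service's output into the next as a refinement step, is simple to sketch. The stage functions below are hypothetical placeholders standing in for external text-to-text services, not the team's actual tooling.

```python
from typing import Callable, List

Stage = Callable[[str], str]  # each stage is a text-in, text-out service call

def run_pipeline(text: str, stages: List[Stage]) -> str:
    """Feed each stage's output into the next, refining the draft as it goes."""
    for stage in stages:
        text = stage(text)
    return text

# Placeholder stages; in the study these would be calls to GPT-3 and
# personality-analysis services rather than local string functions.
def draft(topic: str) -> str:
    return f"Draft message about: {topic}"

def adjust_tone(text: str) -> str:
    return text + " (tone adjusted to the recipient's profile)"

def proofread(text: str) -> str:
    return " ".join(text.split())  # whitespace cleanup as a stand-in for polishing

print(run_pipeline("quarterly security training", [draft, adjust_tone, proofread]))
```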

Though they were impressed by the quality of the synthetic messages and by how many clicks they drew from colleagues compared with the human-composed ones, the researchers caution that the experiment was only a first step. The sample size was relatively small, and the target pool was fairly homogeneous in terms of employment and geographic region. Moreover, both the human-written messages and those from the AI-as-a-service pipeline were created by office insiders, rather than by outside attackers trying to strike the right tone from afar.

“There are a lot of variables to consider,” says Tan Kee Hock, a cybersecurity specialist at the Government Technology Agency.

Still, the findings spurred the researchers to think more deeply about how AI-as-a-service may play a role in phishing and spearphishing campaigns going forward. OpenAI itself, for example, has long feared the potential for misuse of its own service or of similar ones. The researchers note that OpenAI and other scrupulous AI-as-a-service providers have clear codes of conduct, attempt to audit their platforms for potentially malicious activity, and even try to verify user identities to some degree.

"Misuse of language models is an industry-wide issue that we take very seriously as part of our commitment to the safe and responsible deployment of AI," OpenAI told WIRED in a statement. "We grant access to GPT-3 through our API, and we review every production use of GPT-3 before it goes live. We impose technical measures, such as rate limits, to reduce the likelihood and impact of malicious use by API users. Our active monitoring systems and audits are designed to surface potential evidence of misuse at the earliest possible stage, and we are continually working to improve the accuracy and effectiveness of our safety tools."
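
OpenAI does not say how its rate limits are implemented, but a common way to enforce per-user request limits is a token bucket. The sketch below is a generic illustration of that idea, not OpenAI's actual mechanism.

```python
import time

class TokenBucket:
    """Generic token-bucket rate limiter; an illustrative sketch only."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens replenished per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Replenish tokens for the elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # over the limit; the caller should back off

# Allow roughly 2 requests per second, with bursts of up to 5.
bucket = TokenBucket(rate=2.0, capacity=5.0)
for i in range(8):
    print(i, "allowed" if bucket.allow() else "rate limited")
```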
