AI could soon write code based on ordinary language
In recent years, researchers have used artificial intelligence to improve translation between programming languages and to fix problems automatically. The AI system DrRepair, for example, has been shown to solve most issues that generate error messages. Some researchers dream of the day when AI can write programs based on simple descriptions from non-experts.
On Tuesday, Microsoft and OpenAI shared plans to bring GPT-3, one of the world’s most advanced text-generation models, to programming based on natural language descriptions. It is the first commercial application of GPT-3 since Microsoft invested $1 billion in OpenAI last year and obtained exclusive licensing rights to the model.
“If you can describe what you want to do in natural language, GPT-3 will generate a list of the most relevant formulas for you to choose from,” Satya Nadella, CEO of Microsoft, said in the keynote address at the company’s Build developer conference. “The code writes itself.”
Charles Lamanna, a Microsoft VP, told WIRED that the sophistication offered by GPT-3 can help people tackle complex challenges and empower those with little coding experience. GPT-3 will translate natural language into Power Fx, a fairly simple programming language similar to Excel formulas that Microsoft introduced in March.
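To make that concrete, here is a minimal sketch of the kind of round trip the feature is meant to perform. The data source and column names here (Orders, Status, Customer, OrderDate) are hypothetical, chosen only for illustration; the article does not say which formulas the preview actually produces. A user might type a plain-English request such as “show open orders for Contoso, newest first,” and the feature would be expected to suggest a Power Fx formula along these lines:

    // Hypothetical table and columns, for illustration only
    SortByColumns(
        Filter(Orders, Status = "Open", Customer = "Contoso"),
        "OrderDate",
        SortOrder.Descending
    )

Filter and SortByColumns are standard Power Fx functions; the promise of the new feature is that users would be offered such formulas to pick from rather than having to recall the syntax themselves.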
This is the latest demonstration of applying AI to coding. At Microsoft’s Build last year, Sam Altman, CEO of OpenAI, demoed a language model fine-tuned on code from GitHub that automatically generates lines of Python code. As WIRED reported last month, startups such as SourceAI are also using GPT-3 to generate code. And last month, IBM showed how its Project CodeNet, with 14 million code samples across more than 50 programming languages, cut the time needed to update a program with millions of lines of Java code for an automaker from one year to one month.
Microsoft’s new feature is based on a neural network architecture known as the Transformer, used by large technology companies including Baidu, Google, Microsoft, Nvidia, and Salesforce to create large language models trained on text gathered from the web. These language models keep growing larger. The largest version of BERT, a language model Google released in 2018, had 340 million parameters, the building blocks of a neural network. GPT-3, released a year ago, has 175 billion parameters.
Efforts like this have a long way to go, however. In recent research, the best models solved only 14 percent of introductory programming challenges compiled by a team of AI researchers.