
For AI's Latest Trick: Writing Computer Code


It can take years to learn how to write computer code well. SourceAI, a Paris-based startup, believes programming shouldn't be such a big deal.

The company is fine-tuning a tool that uses artificial intelligence to write code based on a brief description of what the code should do. Ask the company's tool to "multiply two numbers given by a user," for example, and it will produce a dozen or so lines of Python that do just that.
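For a sense of what that means in practice, a dozen lines of Python for that prompt might look something like the sketch below. This is an illustrative guess at the kind of code such a tool produces, not actual SourceAI output.

```python
# Hypothetical output for the prompt "multiply two numbers given by a user";
# a real code-generation tool may produce something different.

def multiply(a: float, b: float) -> float:
    """Return the product of two numbers."""
    return a * b

if __name__ == "__main__":
    # Read two numbers from the user, multiply them, and print the result.
    x = float(input("Enter the first number: "))
    y = float(input("Enter the second number: "))
    print(f"{x} * {y} = {multiply(x, y)}")
```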

SourceAI's ambitions are a sign of a broader revolution in software development. Advances in machine learning have made it possible to automate a growing number of coding tasks, from auto-completing segments of code and fine-tuning algorithms to searching source code and tracking down pesky bugs.

Automating coding could change software development, but the limitations and blind spots of modern AI may introduce new problems. Machine learning algorithms can behave unpredictably, and machine-generated code may harbor harmful bugs unless it is scrutinized carefully.

SourceAI and other similar programs aim to take advantage of GPT-3, a powerful AI language model announced in May 2020 by OpenAI, a San Francisco company focused on making fundamental advances in AI. SourceAI's founders were among the first to gain access to GPT-3. OpenAI has not released the code for GPT-3, but it lets some users access the model through an API.
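At the time, that access looked roughly like the minimal sketch below, which uses the completion endpoint of the legacy `openai` Python package; the model name, prompt, and parameters here are illustrative assumptions, not SourceAI's actual setup.

```python
import openai  # legacy OpenAI Python client (pre-1.0 interface)

openai.api_key = "YOUR_API_KEY"  # placeholder; keys were issued to approved users

# Ask the model to turn a plain-English description into Python code.
response = openai.Completion.create(
    engine="davinci",          # a GPT-3 model exposed through the API at the time
    prompt="# Python\n# Multiply two numbers given by a user\n",
    max_tokens=100,
    temperature=0,             # low temperature for more predictable code output
)

print(response.choices[0].text)
```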

GPT-3 is a huge artificial neural network trained on vast amounts of text scraped from the web. It does not grasp the meaning of that text, but it captures the patterns of language well enough to generate articles on a given topic, summarize a piece of writing, or answer questions about the contents of a document.

“While we were testing the tool, we realized that it could create code,” says Furkan Bektes, founder and CEO of SourceAI. “That’s when we had the idea to develop SourceAI.”

He wasn't the first to notice the potential. Shortly after GPT-3 was released, one programmer showed that it could create custom web applications, including buttons, text input fields, and colors, by remixing snippets of code it had been fed. Another company, Debuild, plans to commercialize the technology.

SourceAI aims to let its users create a wider range of programs in a wider range of languages, helping to automate the creation of more software. "Developers will save time on coding, and even people with no coding knowledge will be able to develop applications," says Bektes.

Another company, TabNine, used GPT-2, an earlier version of OpenAI's language model, to build a tool that offers to auto-complete a line or a function when a developer starts typing.

Some software giants also seem interested. Microsoft invested $1 billion in OpenAI in 2019 and has agreed to license GPT-3. At the software giant's Build conference in May, Sam Altman, a cofounder of OpenAI, demonstrated how GPT-3 can auto-complete code for a developer. Microsoft declined to comment on how it might use AI in its software development tools.

Brendan Dolan-Gavitt, an assistant professor in NYU's Department of Computer Science and Engineering, says language models like GPT-3 are more likely to be used to assist human programmers. Other products will use such models "to identify likely errors in your code as you write it, by looking for things that are 'surprising' to the language model," he says.

Relying on AI to generate and analyze code may prove problematic, however. In a paper posted online in March, MIT researchers showed that an AI program trained to verify that code will run safely can be fooled by a few careful changes, such as substituting certain variables, into approving a malicious program. Shashank Srikant, a PhD student involved in the work, says AI models should not be trusted too much. "Once these models go into production, things can go wrong pretty quickly," he says.
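To illustrate the general idea (this is not the MIT team's actual attack), renaming identifiers changes the surface tokens a learned model sees without changing what the program does, which is exactly the kind of gap an adversary can exploit:

```python
# Two functionally identical snippets; only the identifier names differ.
# A model that judges code by its surface patterns may score them
# differently, even though their behavior is the same.

def read_user_file(path):
    # Open a file path supplied by the user and return its contents.
    with open(path) as handle:
        return handle.read()

def x7f1(qq):
    # Same logic, with adversarially chosen identifier names.
    with open(qq) as zz:
        return zz.read()

# Identical compiled bytecode: the behavior has not changed at all.
print(read_user_file.__code__.co_code == x7f1.__code__.co_code)  # True
```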

Dolan-Gavitt, the NYU professor, says the nature of the language models being used to build coding tools also poses risks. "I think using language models directly would probably produce buggy and insecure code," he says. "After all, they are trained on human-written code, which is very often flawed and insecure."


