
Language models like GPT-3 could herald a new type of search engine


A team of Google researchers has now published a proposal for a radical redesign that throws out the ranking approach and replaces it with a single large AI language model, such as BERT or GPT-3, or a future version of them. The idea is that instead of searching for information across a vast list of web pages, users would ask questions and have a language model trained on those pages answer them directly. The approach could change not only how search engines work, but what they do and how we interact with them.

Search engines have become faster and more accurate, even as the web has exploded in size. AI is now used to rank results, and Google uses BERT to better understand search queries. Yet beneath these changes, all mainstream search engines still work the same way they did 20 years ago: web pages are indexed by crawlers (software that reads the web nonstop and keeps a list of everything it finds), results that match a user's query are gathered from that index, and the results are ranked.
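The three-step pipeline described above can be sketched in a few lines of Python. This is a minimal illustration, not any real engine's code: the toy documents and the term-overlap scoring are assumptions made for the example.

```python
# Minimal sketch of the classic "index, retrieve, rank" search pipeline.
# Documents and the term-overlap scoring are toy assumptions for illustration.
from collections import defaultdict

documents = {
    1: "language models answer questions in natural language",
    2: "search engines rank web pages by relevance",
    3: "crawlers read the web and index every page they find",
}

# 1. Index: a crawler feeds pages into an inverted index (term -> doc ids).
index = defaultdict(set)
for doc_id, text in documents.items():
    for term in text.split():
        index[term].add(doc_id)

def search(query):
    # 2. Retrieve: gather every document matching at least one query term.
    terms = query.split()
    matches = set().union(*(index[t] for t in terms if t in index))
    # 3. Rank: order matches by a score (here, how many query terms each shares).
    return sorted(matches, key=lambda d: -sum(t in documents[d].split() for t in terms))

print(search("rank web pages"))  # doc 2 matches all three terms, doc 3 only "web"
```

Real engines replace the toy scoring with relevance models such as BM25 or, increasingly, learned neural rankers, but the blueprint stays the same.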

"This blueprint has withstood the test of time and has rarely been challenged or seriously rethought," write Donald Metzler and his colleagues at Google Research.

The problem is that even the best search engines today still respond with a list of documents that contain the information asked for, not with the information itself. Search engines are also not good at answering queries that require drawing on multiple sources. It's like asking your doctor for advice and being handed a list of articles to read instead of a straight answer.

Metzler and his colleagues are interested in a search engine that behaves like a human expert. It should produce answers in natural language, synthesized from more than one document, and it should back up those answers with references to supporting evidence, as Wikipedia articles aim to do.

Large language models get us part of the way there. Trained on most of the web and hundreds of books, GPT-3 draws on information from multiple sources to answer questions in natural language. The problem is that it does not keep track of those sources and cannot provide evidence for its answers. There is no way to tell whether GPT-3 is repeating trustworthy information, spreading misinformation, or simply making things up.

Metzler and his colleagues call such language models dilettantes: "they are perceived to know a lot, but their knowledge is skin deep." The solution, they claim, is to build and train future BERTs and GPT-3s to keep records of where their words come from. No such models can do this yet, but it is possible in principle, and there is early work in that direction.

There have been decades of progress in different areas of research, from answering queries to summarizing documents, says Ziqi Zhang of the University of Sheffield in the UK, who studies information retrieval on the web. But none of these technologies has overhauled search, because each addresses a specific problem and none is general-purpose. The exciting premise of this paper is that large language models can do all of these things at the same time, he says.

However, Zhang cautions that language models do not perform well on technical or specialist topics because there are fewer examples of them in the text they are trained on. "There are probably hundreds of times more data about e-commerce on the web than data about quantum mechanics," he says. Today's language models are also skewed toward English, which would leave non-English parts of the web underserved.

Even so, Zhang likes the idea. "This hasn't been possible in the past, because large language models only took off recently," he says. "If it works, it would transform our search experience."
