In 1998 a couple of Stanford graduate students published a paper describing a new kind of search engine: “In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems.”
The key innovation was an algorithm called PageRank, which ranked pages by the number and importance of the other pages linking to them: a link from a highly ranked page counts for more than one from an obscure page. On the back of PageRank, Google became the gateway to the internet, and Sergey Brin and Larry Page built one of the biggest companies in the world.
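The core of the idea can be sketched in a few lines of Python. This is an illustrative toy, not Google's production system: it runs the standard power-iteration form of PageRank on a made-up three-page link graph, where a page's score is the probability that a "random surfer" lands on it, following links with probability `damping` and jumping to a random page otherwise.

```python
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with equal scores
    for _ in range(iterations):
        # Base score from the random jump, shared equally by all pages.
        new = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                # A page with no outgoing links spreads its score evenly.
                for p in pages:
                    new[p] += damping * rank[page] / n
            else:
                # Each page splits its score among the pages it links to.
                for target in outlinks:
                    new[target] += damping * rank[page] / len(outlinks)
        rank = new
    return rank

# Toy graph: A and C both link to B, so B ends up with the top score.
graph = {"A": ["B"], "B": ["C"], "C": ["A", "B"]}
scores = pagerank(graph)
```

Because more pages link to B than to anyone else, B comes out on top; the scores always sum to 1, since they form a probability distribution over pages.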
Now a team of Google researchers has published a proposal for a radical redesign that throws out the ranking approach and replaces it with a single large AI language model—a future version of BERT or GPT-3. The idea is that instead of searching for information in a vast list of web pages, users would ask questions and have a language model trained on those pages answer them directly. The approach could change not only how search engines work, but how we interact with them.
Many issues with existing language models will need to be fixed first. For a start, these AIs can sometimes generate biased and toxic responses to queries—a problem that researchers at Google and elsewhere have pointed out.
— MIT Technology Review (@techreview) May 16, 2021