Researchers Find ChatGPT’s New Search Tool Often Gets It Wrong

If you use ChatGPT to find the sources of specific quotes, be aware that it often gives incorrect information when trying to identify the original source.

This finding comes from research conducted to see whether ChatGPT Search could accurately point users to the correct source. The researchers selected articles from three groups of online publishers. The first group consisted of partners of OpenAI, the company behind ChatGPT. The second group was involved in lawsuits against OpenAI. The third included publishers who had either allowed or blocked ChatGPT's content-scraping crawler. The researchers took ten articles from each publisher and chose random quotes from them, quotes that, when searched on traditional search engines like Google or Bing, would return the original article in the top three results.

Unfortunately, ChatGPT got things wrong 153 times. But that's not all: ChatGPT hardly ever admitted it didn't know the answer. Only seven times did it say it couldn't find the exact article or use cautious language such as "it's possible" or "it might be."

This kind of incorrect response can be harmful. For example, in one test, ChatGPT attributed a quote from an Orlando Sentinel article to a Time magazine story. In another case, ChatGPT cited a third-party site that had copied a New York Times article as the original source, instead of pointing directly to the Times itself. Such mistakes could damage trust in both ChatGPT and the publishers.

OpenAI defended itself by explaining that the researchers used an approach real users are unlikely to take, calling the test "atypical." The company also said it plans to keep improving how ChatGPT produces search results. Until then, given its imperfections, the technology should be used cautiously.

Read the original article