A study of the accuracy of Google's AI Overviews raises concerns over numerous errors
An analysis of Google's AI Overviews revealed that, despite high overall accuracy, the system often generates incorrect answers and relies on questionable sources, leaving it open to manipulation. The research was conducted by The New York Times and the startup Oumi.
AI Overviews, a Google feature that automatically generates short answers to search queries, reaches an accuracy of about 91% after the 2026 update to the Gemini 3 model. However, given the volume of queries processed, even a 9% error rate translates into tens of millions of incorrect answers per hour.
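The scale of that error rate can be sketched with a back-of-envelope calculation. The daily query volume below is an illustrative assumption for the sketch, not a figure from the study:

```python
# Back-of-envelope estimate of incorrect AI Overviews per hour.
# The 5 billion overviews/day figure is an assumed, illustrative volume.
overviews_per_day = 5_000_000_000   # assumption, not from the study
error_rate = 1.0 - 0.91             # 91% accuracy (Gemini 3) -> 9% errors

errors_per_hour = overviews_per_day * error_rate / 24
print(f"~{errors_per_hour:,.0f} incorrect answers per hour")
```

Under that assumed volume, the result lands in the tens of millions per hour, consistent with the article's claim; a smaller real volume would shrink the figure proportionally.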
Moreover, more than half of the responses the system deems correct are characterized as "unsubstantiated": the sites AI Overviews cites do not always confirm the information it provides. Particularly worrisome is that a number of answers rely on data from social media platforms such as Facebook and Reddit, which ranked second and fourth, respectively, by citation frequency.
Beyond sourcing issues, the AI has significant difficulty interpreting and presenting reliable information. Examples include an incorrect year for the opening of the Bob Marley Museum and false claims about cellist Yo-Yo Ma.
The system's susceptibility to intentional manipulation is also concerning. In one experiment, deliberately distorted content was shown to alter AI responses, demonstrating the system's vulnerability to misinformation.
Google noted that it warns users about possible errors in AI Overviews responses and advises them to verify information themselves. At the same time, Google is skeptical of the research, arguing that it does not fully reflect real user search queries.
| Model | Accuracy | Share of unsubstantiated responses |
|---|---|---|
| Gemini 2 | 85% | 37% |
| Gemini 3 | 91% | 56% |




