ChatGPT is not a reliable source of information, but also web search sucks

I have complex feelings about large language models. I use them for a narrow set of tasks where they’ve proven to be useful, but I’ve found that their usefulness outside of those tasks is overstated by the tech industry. It worries me to see LLMs shoehorned into applications they’re not suited for (such as therapy or health advice).

More importantly, I’m concerned that these technologies are being built and sold by people who are either morally bankrupt or lacking the humanities education to understand the effect their work has on society. I can’t escape the feeling that LLMs are being forced into my life by tech billionaires and investors who have their own agendas. I find this unacceptable and, frankly, insulting.

But that’s a conversation for another day. In this post, I want to talk about something a bit more nuts-and-bolts: replacing traditional search engines like Kagi or Google with LLMs when looking up information online.


My friends, family, and colleagues have increasingly been turning to ChatGPT when they want to look up information on the internet, entirely bypassing traditional search engines and even the web as a whole. I believe this is a dangerous practice for two reasons.

(My definition of “looking up information on the internet” includes everything from simple factual questions to relationship advice to recipes to searching for news about recent events. Basically, everything you would’ve typed into the Google search box five years ago.)

First, for every question you ask ChatGPT, it confidently presents you with a singular answer and pretends that it’s the only correct answer to your question, some sort of oracular edict handed down from on high. It eliminates nuance and flattens diverse viewpoints into an insipid mulch of caveats and doublespeak. It strips the answer of all context, leaving only raw “facts” that appear to be answer-shaped but may or may not represent the truth you were looking for.

Second, LLMs are sycophantic to some degree, despite researchers’ best efforts to prevent such behavior. They desperately want to please, and in doing so they will happily adjust their output to conform to your personal beliefs. Depending on what it knows about you and how you phrase your prompts, ChatGPT will gleefully lie to you if those lies make you more likely to keep using the app. This traps you in a bubble where you are never confronted with information that might challenge your personal convictions.

Those are my two primary arguments against using ChatGPT or Claude as your main source of information. When I started writing this blog post, my goal was to present these arguments and support them with examples of queries where a traditional search engine yielded better results than ChatGPT. And as I scoured my search history, I did find several such queries.

However, this is where my arguments started falling apart.

While it’s true that I got more nuanced, more diverse, more human answers when I plugged a set of keywords into a search engine, the process of dredging up those answers involved wading through a garbage heap of SEO content, paywalled journal articles, advertisements, irrelevant social media posts, and straight-up scams.

For example, while looking for critical essays about Alain-Fournier’s novel The Lost Estate, I had to dig through pages upon pages of results from websites that wanted me to pay $9/mo to access summaries of popular books, or offered help with writing high school papers, or wanted to sell me study notes “written by Harvard students”. These were the results I got back from Kagi, a search engine that costs $10/mo to use and is ostensibly designed to cut down on ads and SEO spam. Google’s results were even worse!

In contrast, I was able to ask ChatGPT specific questions about the book and get middling but spam-free answers. I was able to use its Deep Research feature to get the robot to wade through the garbage search results and give me a list of links to high-quality essays. This was a far better experience than manually sorting through hundreds of links myself, and returned the same handful of essays I’d found by sifting through Kagi’s results.

And now, I don’t know how to feel about any of this. Hey Siri, insert the Abed Nadir “I need help reacting to something” GIF here.

I almost didn’t publish this blog post because I’m not sure where I stand on this issue anymore, but I figure I should at least register the dissonance I feel about the state of our digital world in the cursed year 2025.

On the one hand, I still firmly believe that blindly accepting AI-generated answers can only end in tears if you actually care about truth, accuracy, or diverse viewpoints. On the other hand, I understand why so many people want to just ask the machine for an answer instead of working to find it themselves. Nobody has hours to waste every day refining search queries, manually filtering out garbage results, and reading the first five hundred words of an article only to find that the rest of it is paywalled.

Our information systems are falling apart. The web is full of spam, malware, scams, ads, popovers, and tracking. High quality sources of information are increasingly behind paywalls. Search engines either don’t care about the garbage clogging up their indexes (Google) or fail to clean them up despite trying their best (Kagi). I can’t blame my friends and family for turning to LLMs to find information when the experience of looking for it on the web is so poor.

I have a feeling the long-term solution to this problem can only be more human curation. What that will look like and how it will be financially sustainable is anybody’s guess. All I can say for sure is that the current state of affairs will push more people to seek out answers from LLMs, which is bad for individuals and bad for society.

I personally use both ChatGPT and Kagi for most of my search queries. I don’t think I’ll ever be able to fully trust an AI-generated answer without verifying it first. There is simply no telling how the LLM sausage is made. Today the danger might be a hallucinated answer, tomorrow it might be unwanted advertising, but five years from now? I’m not willing to believe that LLMs will be impervious to tampering by nation-states that want to rewrite history.

So for as long as I’m able to, I’ll verify everything that comes out of an LLM and keep encouraging people to turn to web search, even if it sucks. But I understand the pain of navigating the web today, so maybe I’ll try being less of a jerk to people when they tell me they get all their information from ChatGPT.