This post was inspired by personal experience using AI; I earned this lesson firsthand.
One of the most revealing habits of the AI era is how rarely people push back on what artificial intelligence tells them.
For generations, people were taught to question information sources. Newspapers could be biased. Television commentators could be mistaken. Even textbooks sometimes contained errors. The basic habit of asking “How do we know this?” was considered an essential part of critical thinking.
Artificial intelligence has quietly disrupted that habit.
When people ask an AI system a question, the response usually arrives instantly. It is clearly written, organized, and delivered in a confident tone. The result looks and reads like an expert’s explanation. Because the answer appears polished and authoritative, many users simply accept it and move on.
But AI systems do not actually “know” things. Large language models generate responses by predicting patterns in the data they were trained on. Most of the time, those patterns produce useful and accurate information. Sometimes they produce confident mistakes.
Research suggests that users rarely verify the difference. A 2025 survey of web users found that only about eight percent consistently check AI-generated answers for accuracy, even though most respondents reported encountering significant problems with AI outputs. In other words, people often recognize that AI can be wrong—but they do not routinely check it.
This creates a growing trust gap.
Investigations into AI-generated news summaries have shown that more than half contain factual issues or misrepresentations. Studies of large language models also show persistent hallucinations—AI confidently generating plausible statements that are actually false—and overconfident responses, with factual error rates sometimes reaching 30 to 50 percent depending on the task.
The technology itself is not the real problem. AI can be an extraordinarily powerful tool for research, drafting, and exploration.
The difference lies in how people use it.
When users treat AI as the beginning of a conversation—questioning claims, asking follow-up questions, and checking important facts—the results improve dramatically. The system becomes a thinking partner rather than a final authority.
In the long run, the value of artificial intelligence will depend less on the machine’s intelligence and more on the curiosity of the person using it. Tools become powerful when people challenge them, not when they accept them without question.
MetaAI contended that the image used with this post was a painting by Carrie Ballantine, an American realist painter known for portraits of the contemporary West.
Actually, it was an AI image I had created with SoraAI. I was simply trying to identify the original image I had used as the basis.
Instead of acknowledging the error, MetaAI insisted it was yet another Carrie Ballantine painting. Over the course of the exchange, it identified the image as at least five different Ballantine paintings, yet in no instance could it show the painting or point me to a website where a version could be seen.
It literally argued with me over the provenance of an AI image that had originated with me.
It eventually conceded that it was wrong.