AI systems should not be judged by “trust.”
AI is not a person you can trust or mistrust. It’s a tool built from code and huge amounts of data. It may sound confident, fair, or neutral, but that does not mean it’s always right. AI can give answers that look accurate but are completely wrong — these are called hallucinations. It can also stall, skip steps, or produce confusing or incomplete answers. And it can repeat hidden biases from the data it was trained on, even when the response sounds objective.
The problem is simple: AI sounds believable, and that makes it easy for people to assume it’s telling the truth when it might not be.
That risk can be reduced, however, by constraining AI with protocols: clear rules and checks that guide how it works.
One example is an Explicit Honesty Protocol I developed with ChatGPT through trial and error. It requires the AI to give direct, factual answers, admit errors quickly, and state uncertainty rather than hide it.
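For illustration, here is a minimal sketch of how rules like these might be encoded, assuming a chat-style model that accepts role-tagged messages. The rule text and the wrap_prompt helper are hypothetical stand-ins, not the actual protocol:

```python
# A minimal sketch of encoding an honesty protocol, assuming a chat-style
# model that takes a list of role-tagged messages. The rule text below is
# illustrative, and wrap_prompt is a hypothetical helper, not a real API.

HONESTY_RULES = """\
Follow these rules in every reply:
1. Answer the question directly; do not pad, stall, or skip steps.
2. If you are not certain, say so explicitly and explain why.
3. If you discover an earlier error, correct it immediately.
4. Never present a guess as an established fact.
"""

def wrap_prompt(user_prompt: str) -> list[dict[str, str]]:
    """Prepend the protocol rules as a system message so every
    exchange starts from the same explicit ground rules."""
    return [
        {"role": "system", "content": HONESTY_RULES},
        {"role": "user", "content": user_prompt},
    ]
```

The point is not the exact wording but that the rules are written down and applied on every exchange, instead of being left to assumption.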
These kinds of protocols don’t make AI “trustworthy,” but they help reduce mistakes, prevent stalling, and increase transparency.
So instead of asking, “Can we trust it?” we should ask better questions:
Has it been tested?
Is it accurate?
Does it show bias?
Does it hallucinate?
How does it handle mistakes?
AI shouldn’t be judged by trust at all. What we need is verification and evidence — not gut feelings.
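To show what “verification and evidence” can look like in practice, here is a minimal sketch of one such check: measuring accuracy against a labeled test set. The ask_model parameter and the toy questions are assumptions for illustration only:

```python
# A minimal sketch of turning "Is it accurate?" into a measurable check.
# ask_model stands in for whatever AI system is under test; the toy
# test set below is an illustrative assumption.

from typing import Callable

def accuracy(ask_model: Callable[[str], str],
             test_set: list[tuple[str, str]]) -> float:
    """Fraction of questions the model answers exactly right."""
    correct = sum(
        1 for question, expected in test_set
        if ask_model(question).strip().lower() == expected.strip().lower()
    )
    return correct / len(test_set)

# Toy example: two questions with known answers.
tests = [("What is 2 + 2?", "4"),
         ("What is the capital of France?", "Paris")]
print(accuracy(lambda q: "4", tests))  # 0.5: evidence, not a gut feeling
```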