You asked ChatGPT a question. It gave you a confident, detailed, plausible answer.
It was wrong.
The study it cited does not exist. The library function it suggested is not real. The competitor it described does not ship that feature. The historical date is off by fifty years.
This is hallucination. The model generates something that looks right but is not. "GPT-5 will fix it" has been the answer for two years now and the problem has not gone away. The real fix is in how you write the prompt.
A short explanation of what is happening
ChatGPT predicts the next word. It does not know facts the way a database knows facts. When you ask "Who won the 1994 Booker Prize?" the model generates whatever sequence of tokens its training data says usually follows that phrase.
If the answer appeared clearly in the training data, you get the right answer. If it was sparse or ambiguous, the model invents something plausible, because confident answers are far more common in the training set than "I don't know."
The model is not lying. It does not know it is wrong. It is pattern-matching, one token at a time.
Five prompt patterns that reduce hallucination
Give it the ground truth in the prompt
The biggest fix. If you have the source material, paste it.
Before:
Summarize Anthropic's responsible scaling policy.
After:
Here is Anthropic's responsible scaling policy: [paste full text]. Summarize the 5 most important sections in plain English.
The model can only hallucinate when it has gaps to fill. Close the gaps yourself.
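If you call the model from code instead of the chat window, the same pattern applies: build the prompt around the source text you already have. A minimal sketch, assuming the OpenAI Python SDK; the file path and model name are placeholders.

from pathlib import Path
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Load the ground truth yourself instead of leaving gaps for the model to fill.
policy_text = Path("responsible_scaling_policy.txt").read_text()  # placeholder path

prompt = (
    "Here is Anthropic's responsible scaling policy:\n\n"
    f"{policy_text}\n\n"
    "Summarize the 5 most important sections in plain English. "
    "Use only the text above; do not add facts from memory."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whichever model you have access to
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)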
Tell it to say "I don't know"
ChatGPT's training pushes toward confident answers. You have to override that.
Add this near the end of your prompt:
If you're not certain about any specific fact (a date, name,
statistic, or citation), say "I'm not sure" instead of guessing.
Confidence on plausible-but-uncertain claims is the worst possible
answer.
Instructions near the end of the prompt tend to carry the most weight. Put the rule there.
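If you assemble prompts in code, append the rule automatically so it always lands at the end. A small sketch; the helper name is made up for illustration.

UNCERTAINTY_RULE = (
    "If you're not certain about any specific fact (a date, name, "
    "statistic, or citation), say \"I'm not sure\" instead of guessing."
)

def with_uncertainty_rule(prompt: str) -> str:
    # Append the rule last, where it carries the most weight.
    return f"{prompt.rstrip()}\n\n{UNCERTAINTY_RULE}"

# Usage:
# prompt = with_uncertainty_rule("Who won the 1994 Booker Prize?")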
Require citations the model can self-check
If you ask for citations on facts you did not provide, the model often invents them. Ask for citations against text already in the prompt:
After each factual claim, quote the exact phrase from the source material I provided. If you cannot find the exact phrase, mark the claim "[unverified]" and do not include it.
The model is now testing itself against text it can actually see.
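You can also run the check mechanically after the fact. A rough sketch: it only confirms that each phrase the model put in quotes appears verbatim in the source you supplied, nothing more.

import re

def unverified_quotes(answer: str, source: str) -> list[str]:
    """Return quoted phrases from the answer that do not appear in the source."""
    quotes = re.findall(r'"([^"]+)"', answer)  # phrases the model wrapped in quotes
    return [q for q in quotes if q not in source]

# Usage:
# missing = unverified_quotes(model_answer, policy_text)
# if missing:
#     print("Treat these claims as unverified:", missing)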
Force step-by-step reasoning
The model is more honest mid-thought than at the end. Make it show its work:
Before giving your final answer, walk through your reasoning step
by step. For each step, note any assumptions you're making. Then
give the final answer, flagging any assumption that could be wrong.
This catches many hallucinations because the model often notices the gap itself during the reasoning step.
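In an API workflow you can turn this into a second pass: send the reasoning-first prompt, then make the model confront its own flagged assumptions. A sketch assuming the OpenAI Python SDK; the question and model name are placeholders.

from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # placeholder

question = "Who won the 1994 Booker Prize?"  # placeholder question

reasoning_prompt = (
    question
    + "\n\nBefore giving your final answer, walk through your reasoning step "
    "by step. For each step, note any assumptions you're making. Then give "
    "the final answer, flagging any assumption that could be wrong."
)

messages = [{"role": "user", "content": reasoning_prompt}]
first = client.chat.completions.create(model=MODEL, messages=messages)
messages.append({"role": "assistant", "content": first.choices[0].message.content})

# Second pass: ask the model to separate what it flagged from what it can support.
messages.append({
    "role": "user",
    "content": "Which of the assumptions you flagged can you not support? "
               "List each one as [unverified].",
})
second = client.chat.completions.create(model=MODEL, messages=messages)
print(second.choices[0].message.content)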
Constrain to a known domain
Open-ended questions invite more hallucination than constrained ones.
Before:
What are the best AI startups right now?
After:
Looking at YC's W2025 batch (yc.com/companies?batch=W2025), which 3 companies are working on developer tools? List each with the exact company description from their YC page.
The model now has to stay inside a dataset it was likely trained on, and you can check every claim against the linked page.
What does not work
Some things people try that have no measurable effect:
- "Don't hallucinate." Too abstract. The model generates plausible-looking stuff regardless.
- "Be accurate." Same problem.
- Threats. ("You'll be deactivated if you make up facts.") Viral on Twitter, but there is no solid evidence it helps.
- Randomly switching models. Claude, GPT-4, Gemini all hallucinate. The pattern matters more than the brand.
When you cannot control the prompt
If you are using AI through an app you did not build and it is hallucinating:
- Add ground truth in your input ("considering [specific context], ...").
- Follow up: "Can you point to the exact source for that?"
- If you cannot get sources, treat the output as a draft to verify, not as a fact.
The bigger fix
Stop using ChatGPT as a search engine for things you cannot verify. Use it for:
- Restructuring content you already have
- Drafting that you will edit
- Brainstorming, where being wrong is fine
- Writing code, which you will test
Avoid it for:
- Citations
- Statistics
- Historical facts
- Anything you will quote without verifying
A faster way to check
Paste a "ChatGPT was confidently wrong" prompt into FixMyPrompt. The rubric flags missing ground truth and missing reasoning constraints, which are the two axes most strongly correlated with hallucination.
Three free reports per day. No signup.