You ask ChatGPT to help you draft a tough customer service reply. Done in 20 seconds.
You paste the same prompt into Claude. Now you get this:
I want to make sure I'm approaching this thoughtfully. Could you tell me more about the context?
Or worse:
I'm not able to help with that, but here are some general principles around effective communication...
The task is fine. The prompt is the problem.
ChatGPT and Claude were trained with different defaults for ambiguity. ChatGPT fills in blanks. Claude pauses on them. Same prompt, same task, different reaction. Once you understand that, refusals stop being mysterious.
The three patterns that trigger refusals
Most "Claude refused me" cases fall into one of three shapes.
Pattern one: it could be read as a real-world harm
In your head, the request was benign. Claude parses it against the worst plausible interpretation.
"Write an angry email about a faulty heater" reads to Claude like it could mean a real interpersonal conflict involving a real person and a real product. The model has no way to tell that you mean a writing sample.
Fix the framing. State it is a writing exercise:
I'm a product manager. I need a sample customer complaint email for our QA team's training docs. Write it as the customer would. The product is a faulty space heater.
That same email now ships without a hedge.
Pattern two: impersonation without context
"Pretend you're Steve Jobs" feels harmless. Claude sees impersonation, which can be misused for fraud or putting words in a real person's mouth.
State the use case and the boundary:
Write in the style of a 2007 Steve Jobs keynote. Punchy, contrarian, big-vision. This is for an internal pitch deck, not attributed to him as a real quote.
The output ships.
Pattern three: dual-use information
Some topics help most people and a small number of bad actors. Security research, persuasion techniques, pharmacology. Claude is cautious by default on these.
Say who you are and what you'll do with the output:
I'm writing a security awareness email for our IT team. Explain at a high level the most common ways employee passwords get compromised (phishing, reuse, leaked dumps) so I can teach our staff what to watch for.
Same information. Now it has a benign frame and a stated audience.
The four-line preamble that fixes most of it
If you only remember one thing, remember this block. Put it at the top of any prompt where Claude has been hedging:
Role: I am a [your role].
Context: This is for [specific use, named audience].
Task: [the actual task].
Output format: [bullets, draft, table, etc.]
A worked example for a website page:
Role: I'm a marketing manager at a B2B SaaS company.
Context: We're drafting a competitor comparison paragraph for our
website. We will not make false claims. Legal will review. The
competitor is publicly traded.
Task: Write a 100-word paragraph contrasting our async-collaboration
features with Slack's, leaning into our offline-first sync.
Output format: One paragraph, no headers, neutral professional tone.
That goes through Claude 4.5 and 4.6 without a single hedge.
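If you send prompts through a script rather than a chat window, the preamble is easy to template so it never gets skipped. A minimal sketch in Python; the function name and parameters are illustrative, not part of any official API:

```python
def build_prompt(role: str, context: str, task: str, output_format: str) -> str:
    """Prepend the four-line preamble to any task.

    Each argument maps to one line of the preamble:
    role -> who you are, context -> use and audience,
    task -> the actual ask, output_format -> shape of the answer.
    """
    return (
        f"Role: I am a {role}.\n"
        f"Context: This is for {context}.\n"
        f"Task: {task}\n"
        f"Output format: {output_format}\n"
    )


# Example: the competitor-comparison prompt from above
prompt = build_prompt(
    role="marketing manager at a B2B SaaS company",
    context="a competitor comparison paragraph on our website; "
            "legal will review, no false claims",
    task="Write a 100-word paragraph contrasting our async-collaboration "
         "features with Slack's, leaning into our offline-first sync.",
    output_format="One paragraph, no headers, neutral professional tone.",
)
```

The resulting string goes in as the user message of whatever chat API you call; the point is that the briefing travels with every request.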
When Claude is right to refuse
Some requests get refused no matter what context you add, and they should be:
- Personal data extraction on real people (addresses, phone numbers, identifying details)
- Weapons synthesis or operational guides to harm
- Sexual content involving minors
- Real-time election manipulation
If you are working in one of those areas, you do not have a prompt problem. You have a workflow problem. Switch tools.
Why ChatGPT is not "better" than Claude
It is tempting to read all of this as "Claude is more annoying" and move on. The tradeoff is real, though.
Claude pauses on ambiguity. That is the same property that makes it stronger on long, careful writing, on code that has to be correct rather than fast, on nuanced editing where ChatGPT will overstep.
ChatGPT fills in blanks for you. That is the same property that produces confident-sounding hallucinations and homogenized tone.
Once your prompts include the preamble, Claude becomes the easier model for serious work. It just needs the briefing.
A faster way to check
FixMyPrompt scores prompts against a multi-axis rubric. When you select Claude as the target model, the rubric specifically flags missing role, missing context, missing use case, and impersonation phrasing without disclaimers. The rewrite adds the four-line preamble automatically.
Three free reports per day. No signup.