Made with 🧠 and 🫀 by Youssef Bouksim


The Backfire Effect

When people encounter information that contradicts a strongly held belief, they sometimes emerge holding the belief more firmly than before. The correction does not update the belief — it hardens it. For designers, this changes how onboarding guidance, behaviour data, and contradictory research should be introduced: confronting a user's existing mental model directly is often the least effective way to change it.

5 min read · Onboarding · Behaviour Change · Research

In 2010, political scientists Brendan Nyhan and Jason Reifler showed participants factual corrections to political misbeliefs and measured what happened to those beliefs. For some participants, the corrections worked. For others, exposure to contradicting evidence made them more confident in the original false belief. The correction backfired.

The mechanism runs through identity and threat. When a belief is tied to a person's sense of who they are — their past decisions, their self-image as a competent actor — contradicting it is experienced as an attack on the self, not as information about the world. The defensive response is to generate counter-arguments, discount the source, and emerge more committed than before.

Any interface that contradicts what a user believes, has done before, or has decided — about their workflow, their behaviour, their past choices — risks triggering the mechanism. The question is not whether to surface contradicting information, but how to frame it so the defensive response does not activate before the information can be evaluated.

✦ Key takeaways
✓ Direct confrontation of a belief strengthens it. Telling a user their existing approach is wrong treats the interaction as a factual correction. If they have built identity around that approach, the correction lands as a challenge to their self-image, not as helpful information.
✓ Affirmation before correction dramatically reduces the effect. Research on self-affirmation theory found that people reminded of their competence in unrelated areas become substantially more open to updating specific beliefs. Establish the user's competence before introducing contradictory information.
✓ Bridging outperforms replacing. Introducing a new model as an extension of what the user already knows maintains their sense of prior competence. Introducing it as a correction asks them to accept that their accumulated experience was mistaken.
“Corrections can make misperceptions worse in people with strong prior attitudes — the correction activates defence rather than updating.”
— Brendan Nyhan & Jason Reifler, Political Behavior, 2010

Onboarding — correcting vs bridging a prior mental model

The moment most likely to trigger the backfire effect in a product is when onboarding contradicts a mental model the user brought from a previous tool. A user who has been managing projects in Notion or Jira arrives with strong beliefs about how project management works. When the product immediately frames its structure as a departure, the reaction is often not adoption but resistance.

Both welcome screens below are for the same product, introducing the same concepts. The left corrects the prior model. The right bridges from it.

Before — Correction framing
linear.app / welcome
Linear works differently from tools you may have used before
Instead of tasks and boards, Linear uses Issues, Cycles, and Teams. Take a few minutes to learn how Linear is structured before you start working.
Issues — not tasks
Cycles — not sprints
Teams — not workspaces

“Works differently from tools you may have used before” signals that prior knowledge is an obstacle. The user’s identity is challenged before they’ve seen the product.

After — Bridging framing
linear.app / welcome
You already know how to manage projects. Here's how Linear maps to that.
Linear organises the same work you've always done — just named and structured slightly differently for speed.
Tasks → Issues — same idea, built for speed
Sprints → Cycles — time-boxed, auto-closing
Groups → Teams — own their own issues

“You already know how to manage projects” affirms competence first. New concepts are mapped from existing ones — not as replacements but as translations.

The bridging version does not avoid introducing Linear's structure. It introduces the same three concepts using the same names. What changes is that the user's existing knowledge is positioned as the foundation rather than the obstacle. The backfire effect activates when prior belief is framed as wrong. It does not activate when prior knowledge is framed as sufficient.
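When onboarding copy is generated rather than hand-written, the bridging framing can be encoded as a translation table from the user's prior vocabulary to the product's. A minimal sketch in Python, using the concept names from the example above (the map and function names are illustrative, not part of any real onboarding API):

```python
# Hypothetical bridge map: each new concept is rendered as a translation
# of a familiar one, so prior knowledge is framed as the foundation.
BRIDGE_MAP = {
    "Tasks": ("Issues", "same idea, built for speed"),
    "Sprints": ("Cycles", "time-boxed, auto-closing"),
    "Groups": ("Teams", "own their own issues"),
}

def bridging_copy(bridge_map: dict[str, tuple[str, str]]) -> str:
    """Render onboarding copy that affirms competence before introducing
    new concepts, mapping old terms to new ones rather than replacing them."""
    lines = ["You already know how to manage projects. Here's how this maps:"]
    for old, (new, gloss) in bridge_map.items():
        lines.append(f"{old} → {new} — {gloss}")
    return "\n".join(lines)

print(bridging_copy(BRIDGE_MAP))
```

The same table could render the correction framing ("Issues — not tasks") from identical data; the backfire-relevant choice lives entirely in the template, not the content.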


Behaviour data — verdict vs information

Behaviour-change products — screen time monitors, spending trackers, productivity tools — are built on the premise that showing users data about their own behaviour will motivate change. This runs directly into the backfire effect. A user who receives a notification that they spent 4 hours on social media yesterday does not necessarily conclude they should spend less time tomorrow. They may conclude they needed those 4 hours, the measurement is unfair, or the notification is patronising.

The design variable is framing. The same usage data presented as a verdict on the user's behaviour activates the backfire effect. The same data presented as neutral information does not.

Verdict — data as evidence of a problem
9:41
⏱
Screen Time
now
Your screen time increased 23% this week
You spent an average of 4h 12m per day — 47 minutes more than last week. Consider setting a daily limit to get back on track.
This week
4h 12m
↑ 23% from last week
Set a limit›

“Get back on track” implies the user was off track. Users with a productive self-image rationalise the usage rather than reduce it.

Awareness — data as neutral information
9:41
⏱
Screen Time
now
Your usage this week
4h 12m daily average, up from 3h 25m. Instagram 1h 40m · Messages 58m · Safari 44m.
This week
4h 12m
↑ from 3h 25m last week
Set a focus schedule›

No verdict. The data is presented as information the user has. The option to act is offered — not prescribed.

Both screens show the same usage numbers. The verdict version opens with “increased 23%” framed as a problem with a prescribed resolution. The awareness version states “your usage this week” and gives the breakdown. The verdict framing makes the motivation external; the awareness framing leaves it with the user.
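Because the two notifications differ only in template, the framing choice is easy to isolate in code. A minimal sketch of the awareness framing, assuming usage is tracked in minutes (all names here are hypothetical, not a real screen-time API):

```python
def fmt(minutes: int) -> str:
    """Format a minute count as '4h 12m' or '58m'."""
    h, m = divmod(minutes, 60)
    return f"{h}h {m:02d}m" if h else f"{m}m"

def awareness_copy(avg_now: int, avg_prev: int, breakdown: dict[str, int]) -> str:
    """State the numbers and the trend; no judgement, no prescription.
    Assumes avg_now >= avg_prev for the 'up from' wording."""
    apps = " · ".join(f"{app} {fmt(mins)}" for app, mins in breakdown.items())
    return ("Your usage this week\n"
            f"{fmt(avg_now)} daily average, up from {fmt(avg_prev)}. {apps}.")

print(awareness_copy(252, 205, {"Instagram": 100, "Messages": 58, "Safari": 44}))
```

A verdict variant would take the same three arguments and add "47 minutes more than last week — consider setting a limit"; the data pipeline is identical, only the copy function changes.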


Research readouts — observation vs verdict

The backfire effect operates on design teams when usability findings contradict a design decision the team is invested in. The belief that the design is good has become identity-linked through the process of creating and defending it. When research challenges that belief, the response is often to find reasons to discount the research.

The framing of how research is delivered to teams is as consequential as the findings themselves.

Before — Design verdict
Slack · #design-research
R
Research team Today 3:02 PM
Navigation test results are in. The redesign confused 4 of 5 participants. Users couldn't find Settings and directly blamed the redesign for hiding it. The information architecture needs to be reconsidered before we ship.
Y
These participants seem unusually unfamiliar with sidebar navigation. Also n=5 is too small to draw conclusions from. We should run more sessions before reconsidering anything.

“The redesign confused users” is a verdict on the design and the team that made it. The defensive response activates immediately.

After — Behaviour observation
Slack · #design-research
R
Research team Today 3:02 PM
Navigation test results. 4 of 5 participants looked for Settings in the top-right corner first, then scanned the sidebar. 3 of 5 expressed surprise when they found it under their profile avatar. Timestamps in the sheet.
Y
Makes sense — Settings has been top-right in every app they've used. Worth adding a one-time tooltip there that points to the new location.

Observation only — where users looked, what they said. No verdict on the design. The team receives behaviour data and responds by solving the problem.

The findings in both Slack messages are identical. What changes is whether the research is framed as an observation about user behaviour or a verdict on a design decision. Separating observation from evaluation — presenting what users did before presenting any interpretation — is the structural equivalent of affirmation before correction.


Applying this to your work

✓ Apply it like this
→ Bridge from existing mental models — "this works like your current sprints, with the addition of auto-closing." Prior competence is affirmed; new structure is additive, not corrective.
→ Present behaviour data as neutral information — same numbers, no implied verdict, action offered not prescribed.
→ Deliver research as timestamped observation — what users did, where they clicked, what they said. Interpretation is separate from observation.
✗ Common mistakes
→ Onboarding that frames new concepts as corrections of prior ones — "we work differently from tools you've used" asks users to accept that their accumulated experience was mistaken.
→ Behaviour data framed as evidence of a problem — "you're spending too much time," "you need to get back on track."
→ Research delivered as design verdicts — "this confused users," "the IA needs reconsidering." Teams discount the research rather than engaging with it.

Nyhan, B., & Reifler, J. (2010). When corrections fail. Political Behavior, 32(2), 303–330. · Wood, T., & Porter, E. (2019). The elusive backfire effect. Political Behavior, 41(1), 135–163. · Cohen, G. L., Aronson, J., & Steele, C. M. (2000). When beliefs yield to evidence. Personality and Social Psychology Bulletin, 26(9), 1151–1164.