Users build mental models of how interfaces work based on every product they have used before. When your design matches their model, interaction is effortless. When it contradicts their model, confusion and errors follow.
When you get into a new car, you don't need to read the manual to know that the steering wheel turns the car, the pedal on the right accelerates, and the one on the left brakes. You know this before you've touched any of it, because every car you've ever been in has worked the same way. You have a mental model of cars -- a simplified internal representation of how they work -- and you apply it instantly to this new one.
Mental models are the internal representations we build of systems, products, and processes. They're simplified -- we don't store the actual physics of how a steering wheel transfers force to the tyres. We store a functional approximation: turn left, car goes left. That approximation is accurate enough to navigate with, even though it's technically incomplete. And it updates slowly -- only when reality contradicts it often enough that the model has to change.
In product design, mental models are the invisible constraint that shapes everything. Users arrive at your interface with expectations built from everything they've used before -- other apps, other websites, physical analogies they've mapped onto digital concepts. When your design matches those expectations, the interface feels intuitive. When it doesn't, users feel confused, even if the design is technically correct. The confusion isn't a failure of the user. It's a mismatch between your implementation model and their mental model.
"The user's mental model of the UI should match the designer's conceptual model of the product. Any difference is friction."
-- Don Norman, The Design of Everyday Things, 1988
E-commerce checkout has a well-established mental model: cart, shipping, payment, confirm. Users who have bought anything online in the last decade have this sequence loaded. Any deviation from it -- a different order, missing steps, unexpected extra steps -- creates hesitation because it doesn't match the model.
The bad example introduces one small deviation that breaks the model at the most sensitive moment. The good example follows the established sequence exactly, adding clarity without adding novelty.
Payment comes before the delivery address -- the reverse of every checkout users have experienced.
Delivery before payment -- the order every user expects. No explanatory note needed.
The bad version may have had a technical reason for putting payment first -- perhaps the back-end needed to tokenise the card before calculating shipping costs. That's a legitimate system constraint. But the user's mental model doesn't know or care about back-end constraints. When you're forced to violate a mental model for technical reasons, the cost is always friction -- and the note trying to explain it is evidence that the violation was recognised and managed, rather than avoided.
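One way to resolve this kind of conflict is to make the step order explicit data and let the system constraint run inside the conventional sequence rather than reorder it. The following TypeScript sketch illustrates the idea; all names (`nextStep`, `tokeniseCard`, the step labels) are hypothetical, not taken from any real checkout API:

```typescript
// Conventional checkout order -- the sequence users arrive expecting.
type Step = "cart" | "shipping" | "payment" | "confirm";

const CONVENTIONAL_ORDER: Step[] = ["cart", "shipping", "payment", "confirm"];

interface CheckoutState {
  completed: Step[];
}

// Returns the next step to show, walking the conventional order.
// The user's model is never violated: shipping always precedes payment.
function nextStep(state: CheckoutState): Step | null {
  for (const step of CONVENTIONAL_ORDER) {
    if (!state.completed.includes(step)) {
      return step;
    }
  }
  return null; // checkout complete
}

// Placeholder for a payment-provider call. Because it runs inside the
// payment step -- after shipping is already known -- the back-end gets
// its token without forcing payment to come first in the UI.
async function tokeniseCard(cardNumber: string): Promise<string> {
  return "tok_" + cardNumber.slice(-4);
}
```

The point of the sketch is the separation: `CONVENTIONAL_ORDER` encodes the user's mental model, and the technical constraint (tokenisation) is scheduled within it, so neither has to bend the other.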
Before designing any new pattern or terminology, ask: what mental model do users arrive with for this task? Where did that model come from -- other apps, physical analogies, established conventions? And if you deviate from it, what is the user actually gaining that justifies the cognitive cost of updating their model?
The standard answer is "our way is better." Sometimes it is. A genuinely better model -- one that's simpler, more powerful, or more consistent -- is worth the cost of teaching. But "different" is not the same as "better." A novel naming system that achieves the same thing as "folders" but requires new vocabulary is not an improvement. It's a tax on the user's attention that the product collects without returning value.
Norman, D. A. (1988). The Design of Everyday Things. Basic Books.
Johnson-Laird, P. N. (1983). Mental Models. Harvard University Press.
Nielsen, J. (2010). Mental models. Nielsen Norman Group.