Made with 🧠 and πŸ«€ by Youssef Bouksim


Postel's Law

Build systems that absorb the variability of human input rather than rejecting it. Accept any reasonable format, normalise behind the scenes, and produce consistent, precise output every time.

5 min read Β· UX Β· Product Β· AI

In 1980, Jon Postel β€” one of the engineers who designed the core protocols of the internet β€” wrote a principle into the specification for the Transmission Control Protocol: β€œBe conservative in what you do, be liberal in what you accept from others.” He was writing about network software, but the principle describes something universal about the relationship between any system and the humans who use it.

Humans are inconsistent. They spell things differently, format data differently, express the same intention through completely different words, and make the same mistakes repeatedly. A system that demands precise, correctly formatted input at every step is a system that spends most of its energy enforcing its own expectations on users rather than doing the thing users came for. Postel's Law inverts this: make the system flexible on input, precise on output. Absorb the mess. Produce clarity.

✦ Key takeaways
βœ“
Input flexibility is a feature, not a concession. Accepting phone numbers in any format, names with inconsistent capitalisation, or dates written differently is not lowering the bar β€” it is correctly distributing the work of normalisation to the system, which can do it reliably, rather than the user, who cannot.
βœ“
Zero results is almost always wrong. When a user types a query and gets nothing back, the system has chosen to enforce its own data model over the user's intent. A system practising Postel's Law asks: what did the user most likely mean? Then it answers that question, transparently, rather than returning empty results.
βœ“
AI makes this principle dramatically more achievable. For most of computing history, β€œbe liberal in what you accept” required explicit engineering for every edge case. Natural language models can now infer intent from vague, incomplete, or ambiguous input and produce well-structured output β€” which is Postel's Law operating at the level of meaning rather than format.
β€œBe conservative in what you do, be liberal in what you accept from others.”
β€” Jon Postel, RFC 761, 1980

Phone number formats β€” whose job is normalisation?

A phone number is a phone number. Whether someone types β€œ555 867 5309”, β€œ(555) 867-5309”, β€œ555.867.5309”, or β€œ+1-555-867-5309”, the intent is identical and the information is the same. The system knows how to store a normalised phone number. The question is whether the work of arriving at that format belongs to the user or the system.

Rejecting any format that doesn't match a specific pattern β€” β€œPlease enter numbers only, no spaces or dashes” β€” is the system asserting its own convenience over the user's. Accepting any reasonable format and normalising it behind the scenes is the system doing its job.

Before β€” strict format rejects variation
Add Contact
Sarah Miller
New contact
Phone number *
(555) 867-5309
Numbers only β€” remove spaces, dashes, parentheses
Accepted formats
βœ— (555) 867-5309
βœ— 555 867 5309
βœ— 555.867.5309
βœ— +1-555-867-5309
βœ“ 5558675309 only
Invalid format
One acceptable format out of many natural ones. Every other way a human naturally writes a phone number is rejected.
After β€” any format accepted
Add Contact
Sarah Miller
New contact
Phone number
(555) 867-5309
Any format accepted
Saved as
+1 (555) 867-5309
All of these work
(555) 867-5309
555 867 5309
555.867.5309
+1-555-867-5309
5558675309
Save number
Any natural format, normalised behind the scenes. The system stores +1 (555) 867-5309 regardless of how it was typed.

The normalised output shown in the good example is the conservative half of Postel's Law: regardless of what format the user typed, the system stores and displays phone numbers in one consistent, internationally readable format. Liberal in, conservative out.
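The liberal-in, conservative-out split fits in a few lines of code. Here is an illustrative Python sketch, assuming North American ten-digit numbers with a default country code; a production system would use a dedicated library such as `phonenumbers` to handle international numbering plans.

```python
import re

def normalize_phone(raw: str, default_country: str = "+1") -> str:
    """Liberal in: accept any reasonable phone format.
    Conservative out: always return one canonical form.
    Illustrative sketch only; real systems should use a library
    such as `phonenumbers` for full international support."""
    digits = re.sub(r"\D", "", raw)        # strip everything but digits
    if len(digits) == 10:                  # no country code: assume default
        digits = default_country.lstrip("+") + digits
    if len(digits) != 11:
        raise ValueError(f"could not interpret phone number: {raw!r}")
    return f"+{digits[0]} ({digits[1:4]}) {digits[4:7]}-{digits[7:]}"
```

Every variant from the example above normalises to the same stored value: `normalize_phone("555.867.5309")` and `normalize_phone("+1-555-867-5309")` both return `"+1 (555) 867-5309"`.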


Search β€” zero results is almost always a failure

When a user types a search query with a typo and the system returns zero results, it has made a choice: enforce exact matching over user intent. The user knows what they meant. The system knows what they typed. Only one of those is useful information for deciding what results to show.

β€œDid you mean…” is Postel's Law applied to search. The system infers the most likely intended query, returns results for that query, clearly discloses what it did, and allows the user to override if the inference was wrong.

Before β€” typo returns nothing
yourapp.com/search?q=cognitve+load
Docs
cognitve load
No results found
No results for β€œcognitve load”.
Check your spelling and try again.
One missing letter in β€œcognitive” β€” zero results and a blank page. The system knows exactly what the user meant and chose not to help.
After β€” infers intent, shows results
yourapp.com/search?q=cognitve+load
Docs
cognitve load
Showing results for cognitive load
3 results for β€œcognitive load”
docs.yourapp.com
Reducing cognitive load in onboarding flows
How to apply cognitive load theory to simplify user onboarding β€” progressive disclosure, chunking, and defaults.
docs.yourapp.com
Cognitive load and form design
Why long forms feel hard: the relationship between field count, cognitive load, and completion rates.
docs.yourapp.com
Design principles: cognitive load
A reference guide to cognitive load β€” intrinsic, extraneous, and germane β€” with design applications.
Same typo β€” three relevant results. The correction is clearly shown and the user can override it.

The β€œShowing results for” banner is the conservative output half of Postel's Law: the system doesn't silently correct and pretend the typo never happened. It transparently states what it did and gives the user an explicit path to override the inference.
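The same pattern can be sketched in code. This is a minimal illustration using Python's standard-library `difflib` over a hypothetical in-memory index; a real search system would use a proper index with stemming and edit-distance ranking, but the shape is the same: fall back to the closest known query, and return the correction alongside the results so the UI can disclose it.

```python
import difflib

# Hypothetical index mapping known queries to result titles,
# standing in for a real search backend.
INDEX = {
    "cognitive load": [
        "Reducing cognitive load in onboarding flows",
        "Cognitive load and form design",
        "Design principles: cognitive load",
    ],
}

def search(query: str) -> dict:
    """Never return zero results when a close match exists.
    Returns results plus the disclosed correction, if any."""
    if query in INDEX:
        return {"query": query, "corrected": None, "results": INDEX[query]}
    # Liberal input: fall back to the closest known query.
    close = difflib.get_close_matches(query, list(INDEX), n=1, cutoff=0.8)
    if close:
        # Conservative output: disclose the inference, don't hide it.
        return {"query": query, "corrected": close[0], "results": INDEX[close[0]]}
    return {"query": query, "corrected": None, "results": []}
```

A query with the typo from the example, `search("cognitve load")`, returns the three results for "cognitive load" with `corrected` set, which is exactly what the "Showing results for" banner needs to render.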


AI and ambiguity β€” proceeding vs interrogating

The most common failure mode of AI assistants when faced with an ambiguous or underspecified request is to respond with a list of clarifying questions. The request was vague. Now the user has five questions to answer before anything useful happens.

Postel's Law applied to AI says: be liberal in what you accept. An AI practising this makes a reasonable set of assumptions, proceeds with a useful response, and states its assumptions clearly so the user can redirect if needed. The clarification loop collapses from several exchanges into one.

Before β€” asks everything before helping
AI Assistant
Write me an email about the project update
Before I can help, I need some details:
1. Who is this email for β€” a client, your manager, or the whole team?
2. What is the project, and what specifically should the update cover?
3. What tone would you like β€” formal, casual, or somewhere in between?
4. Is there a specific call to action or next step to include?
5. How long should the email be?
Answer the questions above...
5 questions before a single word is written. The user came for a draft. They got a questionnaire.
After β€” assumes, proceeds, discloses
AI Assistant
Write me an email about the project update
Subject: Project update β€” quick summary

Hi team,

Quick update on where things stand: we're on track with the current milestone and expect to hit the next checkpoint on schedule.

Let me know if you have any questions.

[Your name]
Assumptions made
Addressed to your team Β· Professional but approachable Β· Brief status update
Different audience or tone? Tell me
Make it more formal
Write for a client
Add next steps
Adjust anything...
A complete draft with assumptions listed below. Adjustments are one tap away. The ambiguity was absorbed, not returned.

The assumption card is the Postel's Law disclosure mechanism: the system proceeded liberally, and now conservatively states exactly what it assumed so the user can verify or correct. One working draft plus transparent assumptions is almost always more useful than five clarifying questions.
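The assumption card implies a response shape: the draft, the assumptions behind it, and the one-tap overrides travel together. A minimal sketch of that shape, with illustrative names and a hard-coded response standing in for the model's actual inference:

```python
from dataclasses import dataclass, field

@dataclass
class AssistantResponse:
    """Response schema pairing a liberal inference with conservative
    disclosure. Field names here are illustrative, not a real API."""
    draft: str                                        # the working output
    assumptions: list = field(default_factory=list)   # what was inferred
    quick_edits: list = field(default_factory=list)   # one-tap overrides

def answer_vague_request(request: str) -> AssistantResponse:
    # A real assistant would infer these from the request and context;
    # hard-coded here to show the shape of the disclosure.
    return AssistantResponse(
        draft="Hi team,\n\nQuick update: we're on track with the "
              "current milestone and expect to hit the next "
              "checkpoint on schedule.",
        assumptions=[
            "Addressed to your team",
            "Professional but approachable",
            "Brief status update",
        ],
        quick_edits=["Make it more formal", "Write for a client",
                     "Add next steps"],
    )
```

Because assumptions are a first-class field rather than prose buried in the draft, the interface can render them as the card in the example and wire each quick edit to a follow-up request.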


Applying this to your work

Postel's Law asks a single clarifying question about every input field, search box, and conversational interface in your product: are we making the user do work that the system could do instead? Normalising a phone number, correcting a typo, inferring intent from a vague request β€” all of these are tasks that a system can perform more reliably than a human, every single time, at no marginal cost.

The conservative output principle is equally important. Flexibility in what you accept does not mean imprecision in what you produce. Phone numbers stored in inconsistent formats cause downstream failures. Search corrections that aren't disclosed erode trust. AI assumptions that are hidden lead to confusion when the output is wrong. Accept flexibly, output precisely, and always make your inferences visible.

βœ“ Apply it like this
β†’Accept phone numbers, postal codes, dates, and names in any reasonable format β€” strip, normalise, and store consistently behind the scenes.
β†’Never return zero search results if fuzzy matching, stemming, or spelling correction can produce relevant results β€” show what you found and disclose the correction.
β†’When AI input is ambiguous, make a reasonable default assumption, produce a working output, and state the assumption clearly with an easy path to redirect.
β†’Validate input liberally and output conservatively β€” what the user gives you can vary freely, what the system stores and displays should always be consistent.
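The date case from the first bullet follows the same liberal-in, conservative-out pattern. A sketch using only the standard library, with an illustrative (not exhaustive) list of accepted formats; note that the order of the slash formats encodes a locale preference, since "03/04/2024" is ambiguous on its own.

```python
from datetime import datetime

# Formats a user might naturally type. Order matters: for ambiguous
# slash dates, the first matching format (day/month here) wins.
ACCEPTED_FORMATS = ["%Y-%m-%d", "%d/%m/%Y", "%m/%d/%Y",
                    "%d %b %Y", "%B %d, %Y"]

def normalize_date(raw: str) -> str:
    """Liberal in: try every accepted format.
    Conservative out: always store ISO 8601 (YYYY-MM-DD)."""
    for fmt in ACCEPTED_FORMATS:
        try:
            return datetime.strptime(raw.strip(), fmt).date().isoformat()
        except ValueError:
            continue
    raise ValueError(f"could not interpret date: {raw!r}")
```

Whether the user types "4 Mar 2024", "March 4, 2024", or "2024-03-04", the system stores the single canonical form "2024-03-04", so downstream code never has to guess.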
βœ— Common mistakes
β†’Rejecting phone numbers, emails, or dates that don't match a specific pattern β€” the system is enforcing its own convenience over the user's natural behaviour.
β†’Returning empty results for a query with an obvious typo β€” this is the system choosing data-model purity over user utility.
β†’Responding to vague AI requests with a list of clarifying questions before producing anything β€” this inverts the work allocation that Postel's Law prescribes.
β†’Silently correcting or inferring without disclosure β€” liberality in input must be paired with transparency about what the system assumed.

Postel, J. (1980). DoD Standard Transmission Control Protocol. RFC 761. Information Sciences Institute, USC. The principle was later stated more explicitly in RFC 793 (1981) and has been applied as a software design principle far beyond its original network context.