Humane design is an approach to building technology that prioritizes human wellbeing over engagement and profit. Instead of designing apps, platforms, and devices to capture as much of your attention as possible, humane design asks a different question: is this technology actually good for the person using it? The concept centers on aligning technology with human values like focus, autonomy, and mental health, rather than exploiting psychological vulnerabilities to keep you scrolling.
The Problem Humane Design Responds To
Most digital products are built around engagement metrics. Companies measure success by how many daily active users they have, how long those users stay on screen, and how often they return. To hit those numbers, designers use techniques borrowed from behavioral psychology: variable rewards like social media “likes” that work on the same principle as slot machines, persistent notifications that pull you back to your phone, and infinite scroll feeds that remove natural stopping points.
These patterns have measurable consequences. Research published in PLOS Digital Health found that persuasive design strategies common on social media platforms, including reward systems like likes and shares, and notifications that push users to check their devices throughout the day, are linked to increased anxiety, aggression, social media addiction, and sleep deprivation in children. The core issue is that what’s good for the company’s engagement numbers is often bad for the person holding the phone.
Where the Movement Started
The term gained traction through the work of Tristan Harris, a former Design Ethicist at Google. In the early 2010s, Harris noticed that attention-harvesting design was becoming the default across social media and digital platforms, eroding people’s ability to focus, weakening their relationships, and harming their mental health. He created a presentation called “A Call to Minimize Distraction & Respect Users’ Attention,” which went viral inside Google and beyond, launching what became known as the “Time Well Spent” movement.
In 2018, Harris co-founded the Center for Humane Technology with technologists Aza Raskin and Randima Fernando. The organization initially focused on social media harms but has since expanded to address broader concerns about how technology shapes society, including artificial intelligence.
Dark Patterns vs. Humane Alternatives
One of the clearest ways to understand humane design is by contrasting it with what the industry calls deceptive patterns (formerly “dark patterns”). The Nielsen Norman Group defines a deceptive pattern as a design that prompts users to take an action benefiting the company by deceiving, misdirecting, shaming, or obstructing the user’s ability to choose differently. Humane design is essentially the opposite: making the user’s best interest the default.
Common deceptive patterns include obstruction (making it unreasonably difficult to cancel a subscription), nagging (repeatedly asking you to enable notifications after you’ve declined), visual tricks (using confusing double negatives on privacy settings so you accidentally opt in), and emotionally manipulative designs (guilt-tripping language when you try to unsubscribe, like “No thanks, I don’t want to save money”). Behavioral economists use the term “sludge” for friction deliberately placed between you and a choice that would benefit you, like burying a cancellation button behind five screens of retention offers.
Humane design replaces these tactics with transparency. Cancellation should be as easy as sign-up. Privacy settings should use plain language. Defaults should protect the user, not the company’s data collection goals.
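To make “defaults should protect the user” concrete, here is a minimal Python sketch (all field names hypothetical) contrasting humane defaults with their dark-pattern inversion:

```python
from dataclasses import dataclass

# Humane defaults: every field starts at the privacy-protecting choice,
# so the user must actively opt in to anything that serves the company.
@dataclass
class HumaneSettings:
    share_usage_data: bool = False   # never pre-checked
    personalized_ads: bool = False   # off until explicitly requested
    marketing_emails: bool = False   # no surprise subscriptions

# The inversion: defaults serve the company, and the user must hunt
# through menus to switch each one off.
@dataclass
class DarkPatternSettings:
    share_usage_data: bool = True
    personalized_ads: bool = True
    marketing_emails: bool = True

print(HumaneSettings())  # all False: doing nothing leaves the user protected
```

The point is where the effort falls: with humane defaults, inaction protects the user; with the inversion, inaction quietly benefits the company.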
What Humane Design Looks Like in Practice
You’ve likely already encountered humane design features, even if you didn’t think of them that way. Apple’s Screen Time tool lets you track how long you spend in each app and set daily limits. Gmail lets you customize notifications by priority so only important emails interrupt you, while less urgent messages arrive silently. Apple’s Sleep Focus mode silences all non-urgent notifications during sleeping hours but allows calls from select contacts to come through. WhatsApp lets you mute notifications from specific people or groups.
These features share a common thread: they give you control over how technology demands your attention, rather than letting the platform decide for you. Notification batching (delivering non-urgent alerts in groups at set times instead of one by one throughout the day) is another example. So is removing infinite scroll in favor of natural content boundaries, or showing you how long you’ve been using an app with gentle reminders.
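Notification batching is concrete enough to sketch. Here is a minimal, hypothetical scheduler in Python (the class, method names, and batch times are assumptions for illustration, not any platform’s actual API) that holds non-urgent alerts until fixed delivery slots while letting urgent ones through immediately:

```python
import heapq
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass(order=True)
class Notification:
    deliver_at: datetime
    message: str = field(compare=False)

class BatchingScheduler:
    """Deliver non-urgent notifications in batches at fixed times
    instead of one by one throughout the day; urgent notifications
    bypass the batch entirely."""

    def __init__(self, batch_hours=(9, 13, 18)):
        self.batch_hours = sorted(batch_hours)
        self.queue: list[Notification] = []

    def next_batch_time(self, now: datetime) -> datetime:
        # First slot later today, else the first slot tomorrow.
        for hour in self.batch_hours:
            slot = now.replace(hour=hour, minute=0, second=0, microsecond=0)
            if slot > now:
                return slot
        tomorrow = now + timedelta(days=1)
        return tomorrow.replace(hour=self.batch_hours[0], minute=0,
                                second=0, microsecond=0)

    def submit(self, message: str, urgent: bool, now: datetime) -> None:
        if urgent:
            print(f"[{now:%H:%M}] deliver now: {message}")
        else:
            heapq.heappush(self.queue,
                           Notification(self.next_batch_time(now), message))

    def tick(self, now: datetime) -> None:
        # Called periodically; flushes every alert whose slot has arrived.
        while self.queue and self.queue[0].deliver_at <= now:
            print(f"[{now:%H:%M}] batched: {heapq.heappop(self.queue).message}")

if __name__ == "__main__":
    sched = BatchingScheduler()
    morning = datetime(2024, 1, 8, 8, 30)
    sched.submit("Someone liked your post", urgent=False, now=morning)
    sched.submit("Two-factor login code", urgent=True, now=morning)
    sched.tick(datetime(2024, 1, 8, 9, 0))  # the 9:00 batch flushes here
```

The design choice worth noticing is the explicit urgent path: batching only respects attention if genuinely time-sensitive messages are exempt from it.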
The Principles Behind It
Humane design draws on cognitive science, particularly the idea that human attention and working memory are limited resources that deserve protection. Research on cognitive load theory shows that clean, simple interfaces with clear language and minimal distractions help people process information more effectively. Techniques like chunking content into smaller segments, removing purely decorative visual elements, and placing related text and images close together all reduce the mental effort required to use a product.
At a higher level, humane design rests on a few core commitments:
- Respect for attention. Your time and focus are finite. Design should help you accomplish what you came to do, not siphon your attention toward something more profitable for the platform.
- User autonomy. You should be able to make informed choices about how a product works, what data it collects, and how much of your time it takes. Defaults should favor your interests.
- Transparency. If an algorithm is choosing what you see, you should understand the basic logic behind it. If a design is trying to influence your behavior, that should be visible, not hidden.
- Minimal intrusion. Technology should fit into your life and workflow smoothly, reducing friction rather than creating it. Tools should solve problems without generating new ones.
Measuring Success Differently
Traditional tech companies optimize for engagement: time on site, clicks, daily active users. Humane design proposes a fundamentally different metric. The “Time Well Spent” framework, developed through five years of primary research, measures whether users consider the time they spent with a product to be valuable. It asks three questions: Did the product get the job done for the user? How engaged were they with the experience? And did they consider it worth their time?
This is a subtle but important shift. A social media app might score high on traditional engagement because users compulsively check it 80 times a day, but score poorly on Time Well Spent because those users feel worse afterward. Humane design pushes companies to optimize for the latter measure, even if it means shorter sessions or fewer page views.
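As an illustration only, here is one way those three questions could be folded into a single score in Python; the 1-to-5 survey scale, equal weighting, and function name are assumptions made for this sketch, not part of the published framework:

```python
def time_well_spent_score(got_job_done: int,
                          engagement: int,
                          worth_the_time: int) -> float:
    """Combine the three Time Well Spent questions, each answered on a
    1-5 survey scale, into a single 0-1 score (equal weights assumed)."""
    responses = (got_job_done, engagement, worth_the_time)
    if not all(1 <= r <= 5 for r in responses):
        raise ValueError("responses must be on a 1-5 scale")
    return sum((r - 1) / 4 for r in responses) / len(responses)

# A compulsive-but-hollow session: highly engaging, yet the user got
# little done and regrets the time. Traditional engagement metrics
# would look healthy; this score does not.
print(time_well_spent_score(got_job_done=2, engagement=5, worth_the_time=1))
# -> roughly 0.42
```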
Humane Design and AI
As artificial intelligence becomes embedded in more products, humane design principles are expanding to cover algorithmic systems. A framework developed at Duke University outlines several principles specific to AI. The most central is “human in the loop,” meaning AI should support human decision-making rather than replace it. AI can reduce cognitive load, organize information, and accelerate work, but humans should remain responsible for final decisions.
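To show what “human in the loop” can mean structurally, here is a minimal Python sketch assuming a generic propose/approve split; the types, names, and console gate are illustrative inventions, not the Duke framework’s API:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Proposal:
    summary: str
    confidence: float  # the model's own 0-1 estimate

def human_in_the_loop(propose: Callable[[], Proposal],
                      approve: Callable[[Proposal], bool]) -> Optional[Proposal]:
    """The AI drafts, the human decides: no proposal takes effect
    without explicit sign-off, and rejections are dropped, not retried."""
    proposal = propose()
    return proposal if approve(proposal) else None

if __name__ == "__main__":
    # A stub standing in for any model call, and a console prompt
    # standing in for whatever approval UI a real product would use.
    stub_model = lambda: Proposal("Refund order #1234", confidence=0.82)
    gate = lambda p: input(f"{p.summary} ({p.confidence:.0%}) approve? [y/N] ").lower() == "y"
    print(human_in_the_loop(stub_model, gate))
```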
Other AI-specific principles include participatory design and stakeholder ownership. In participatory design, the people who will actually use a tool help define the problem and test prototypes rather than having solutions imposed on them; with stakeholder ownership, those closest to the work can adjust the tool as their needs change without waiting for engineers to rebuild it. Privacy and security remain non-negotiable baselines regardless of how the AI is deployed.
Formal Standards
Humane design isn’t just a philosophy. It’s starting to be codified into industry standards. IEEE 7000-2021 establishes processes for organizations to incorporate ethical values throughout technology development, from early concept exploration through final design. The standard focuses on transparent communication with stakeholders, traceability of ethical values through every stage of design, and ethical risk assessment. It applies to organizations of all sizes and is compatible with any existing development lifecycle.
These formal standards give designers and engineers something concrete to point to when pushing back against exploitative design decisions within their organizations. They transform “this doesn’t feel right” into “this doesn’t meet our ethical design requirements,” which carries more weight in a business setting.

