What Is Maze Testing and How Does It Work?

Maze is a remote usability testing platform that lets design and product teams collect feedback on prototypes, wireframes, and concepts from real users before launching a product. It runs unmoderated tests, meaning participants complete tasks on their own time without a researcher guiding them, which makes it possible to gather both quantitative and qualitative data quickly and at scale.

How Maze Testing Works

The basic workflow starts with importing a prototype from a design tool into Maze. From there, you create what Maze calls a “mission,” which is a specific task you want testers to complete, like finding a checkout button or navigating to a settings page. You can also add open-ended questions, surveys, card sorting exercises, or tree testing tasks to the same study. Once everything is set up, you preview the test and share a link with participants.

Because tests are unmoderated, participants click through your prototype independently. Maze tracks every interaction: where they clicked, how long they spent on each screen, whether they completed the task successfully, and where they got stuck or gave up. All of this data is compiled automatically into a results dashboard.

What Maze Measures

Maze generates several usability metrics that help teams spot problems in a design:

  • Success rate: The percentage of participants who completed a task. Maze breaks this into “direct success” (they followed the expected path) and “indirect success” (they completed it but took detours).
  • Misclick rate: The percentage of clicks that landed outside a clickable area on a given screen. A high misclick rate usually signals confusing layout or misleading visual cues.
  • Bounce rate: The proportion of participants who abandoned the task before finishing.
  • Average duration: How long participants spent on each screen or on the overall task.

These numbers give teams a fast read on whether a design is intuitive. If 40% of testers misclick on the same screen, that screen needs work. If most users complete a task but take twice as long as expected, the flow might be technically functional but unnecessarily complex.
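To make the definitions concrete, here is an illustrative sketch (not Maze's internal code) of how these four metrics could be computed from hypothetical per-participant session records; the data format and numbers are invented for the example.

```python
# Hypothetical session records: (outcome, total_clicks, misclicks, seconds)
sessions = [
    ("direct",   5, 0, 30),
    ("indirect", 9, 2, 55),
    ("direct",   4, 1, 25),
    ("bounce",   3, 2, 40),
]

n = len(sessions)
direct   = sum(1 for s in sessions if s[0] == "direct")
indirect = sum(1 for s in sessions if s[0] == "indirect")
bounced  = sum(1 for s in sessions if s[0] == "bounce")

# Success rate combines direct and indirect completions.
success_rate = (direct + indirect) / n * 100            # 75.0%
# Misclick rate: share of all clicks that missed a clickable area.
misclick_rate = (sum(s[2] for s in sessions)
                 / sum(s[1] for s in sessions)) * 100   # 5/21 ≈ 23.8%
bounce_rate = bounced / n * 100                         # 25.0%
avg_duration = sum(s[3] for s in sessions) / n          # 37.5 s

print(f"Success: {success_rate:.0f}% (direct {direct}, indirect {indirect})")
print(f"Misclick rate: {misclick_rate:.1f}%")
print(f"Bounce rate: {bounce_rate:.0f}%")
print(f"Avg duration: {avg_duration:.1f}s")
```

In this toy dataset, three of four participants succeeded (one via a detour), and nearly a quarter of all clicks missed a target, which is exactly the kind of signal that would flag a screen for redesign.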

Click Heatmaps and Path Analysis

One of Maze’s most useful features is click heatmaps. These visual overlays show exactly where participants tapped or clicked on each screen, color-coded by density. You can view heatmaps in three ways: aggregated across all testers, filtered by screen, or broken down by individual participant.

Aggregated path heatmaps are especially helpful because they separate clicks by outcome. You can see where people who succeeded directly clicked versus where people who gave up clicked, which often reveals the exact moment a design loses users. You can also draw custom “click areas” on any screen to pull up metrics for a specific region, like a particular button or menu item, and see its misclick rate and average time spent.

Design Tool Integrations

Maze connects directly with several popular design and prototyping tools. Figma integration is the most seamless, letting you import prototypes with a single click. Axure prototypes can be tested through Maze’s live website testing feature. The platform also supports prototypes from Figma Make, Lovable, and Bolt. This means teams can move from design to testing without exporting files or rebuilding interactions.

Beyond Prototype Testing

While prototype usability testing is Maze’s core function, the platform supports several other research methods. Card sorting lets you understand how users expect content to be organized. Tree testing evaluates whether your navigation structure makes sense by asking participants to find items within a hierarchy. Content and copy testing helps validate whether labels, instructions, and messaging communicate clearly. Surveys can be embedded into any study to capture opinions, preferences, or demographic information alongside behavioral data.

Enterprise accounts also get access to moderated interviews (with a live researcher), AI-moderated interviews, mobile app testing, and audio/video/screen recording. These features bridge the gap between Maze’s quantitative strengths and the deeper qualitative context that unmoderated tests can miss, since watching someone click through a prototype doesn’t always explain why they made a particular choice.

Finding Participants

You can share a Maze test link with anyone, which works well if you already have access to customers or a user community. For teams that need outside participants, Maze offers a built-in recruitment panel with access to over 7 million people. Panel recruitment uses a credit system: 50 credits cost $250, 100 credits cost $500, and 500 credits cost $2,500.

For a straightforward unmoderated study without screening questions, one credit gets you one participant response. Adding a screener to filter for specific demographics or behaviors triples the cost to three credits per response. Moderated studies are significantly more expensive at 25 credits per general consumer participant and 45 credits per industry expert. Maze recommends at least 10 participants for unmoderated studies and at least 5 for moderated ones. You can recruit up to 250 participants for a single study, and the test can’t exceed 45 minutes in estimated length.
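Since every listed credit tier works out to the same $5 per credit, the panel pricing above reduces to simple arithmetic. The sketch below is a back-of-the-envelope estimator based only on the figures in this article; Maze may change its rates, so treat it as illustrative rather than an official calculator.

```python
# All numbers come from the pricing described above (assumed current).
PRICE_PER_CREDIT = 250 / 50  # $5 at every listed tier (50, 100, 500 credits)

CREDITS_PER_RESPONSE = {
    "unmoderated": 1,            # no screener
    "unmoderated_screened": 3,   # screener triples the cost
    "moderated_consumer": 25,
    "moderated_expert": 45,
}

def study_cost(study_type: str, participants: int) -> float:
    """Estimated panel cost in dollars for a single study."""
    if participants > 250:
        raise ValueError("Maze caps a single study at 250 panel participants")
    return CREDITS_PER_RESPONSE[study_type] * participants * PRICE_PER_CREDIT

# A screened unmoderated study at the recommended minimum of 10 participants:
print(study_cost("unmoderated_screened", 10))  # 3 * 10 * $5 = 150.0
```

At these rates, a minimal unscreened unmoderated study (10 participants) costs $50, while a minimal moderated study with industry experts (5 participants) runs $1,125.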

Pricing and Plans

Maze offers a free tier that includes one study per month, up to five team seats, basic prototype testing, surveys, and pay-per-use panel credits. This is enough to run occasional small tests but limits how much research a team can do regularly.

The Enterprise plan removes those restrictions with unlimited seats, custom study quantities, and the full feature set: advanced usability and information architecture testing, AI-moderated interviews, moderated interview studies, mobile testing, automated research analysis, presentation-ready reports, and enterprise security features like single sign-on. Enterprise pricing requires contacting Maze’s sales team.

Where Maze Testing Fits

Maze is strongest when teams need fast, repeatable validation of design decisions. Its unmoderated format means you can run a test in the morning and have usable data by the afternoon, which fits well into agile development cycles where designs change frequently. The quantitative metrics (success rates, misclick rates, task duration) give clear, comparable numbers that make it easy to measure whether a redesign actually improved usability.

The tradeoff is depth. Unmoderated testing captures what users do but not always why they do it. A participant might complete a task successfully while feeling confused or frustrated, and that nuance won’t show up in a heatmap. For complex research questions where motivation, emotion, or context matters, teams typically combine Maze’s quantitative data with qualitative methods like interviews or think-aloud sessions to get the full picture.