Human-robot interaction (HRI) is the study of how people and robots communicate, collaborate, and coexist. It spans everything from a surgeon guiding a robotic arm through a procedure to a child talking to a companion robot at home. The field draws on robotics, psychology, computer science, design, and ethics to make these exchanges feel natural, safe, and useful.
HRI has two core dimensions: a social component, driven by the fact that people instinctively treat robots as agents with intentions and expectations, and a physical component, where the forces, movements, and spatial awareness between a human and robot need to be carefully coordinated. Understanding both sides is what makes the field genuinely interdisciplinary.
How Humans and Robots Interact
Not all human-robot interaction looks the same. Researchers categorize it along several axes. One of the most basic is proximity: interactions can be remote (a bomb disposal operator controlling a robot from a safe distance) or proximate (a factory worker assembling parts alongside a collaborative robot). They can also be brief, one-time encounters or repeated, long-term relationships, like an elderly person living with a care robot for months.
Another key distinction is whether the robot has a physical body or exists only on a screen. Studies consistently show that physically embodied robots are perceived differently than virtual agents displayed on monitors. People respond more strongly to a robot in the room with them, attributing more social presence and trustworthiness to it. This matters for designers deciding whether a task genuinely needs a physical robot or whether a screen-based assistant would work just as well.
Within these categories, the robot’s role varies widely. It might be a tool the human directly controls, a teammate sharing a task, a guide providing information in a museum, or a social companion offering emotional support. Each role demands different levels of autonomy, communication, and social skill from the robot.
What Makes People Trust a Robot
Trust is one of the most studied topics in HRI because it determines whether people will actually use robots in real settings. A meta-analysis of trust factors found that the robot’s own performance and attributes are the largest contributors to whether someone trusts it. In practical terms, a robot that consistently does what it’s supposed to do earns trust faster than one that looks friendly but makes errors.
Environmental factors also play a moderate role. A robot operating in a high-stakes environment like a hospital faces a higher trust threshold than one handing out flyers at a trade show. Human characteristics, including someone’s prior experience with technology, their age, and their personality, shape trust as well, but less than the robot’s actual behavior.
This finding has a clear design implication: making a robot look appealing matters, but making it perform reliably matters more. A charming robot that fumbles its task will lose trust quickly.
How Robots Sense and Respond to People
For a robot to interact with a person, it first needs to perceive them. Modern HRI systems combine multiple sensor types to build a picture of what a human is doing. Cameras paired with computer vision algorithms let robots recognize faces, track body movements, and interpret gestures like pointing or nodding. Multi-camera setups map 3D space so robots can navigate around people without collisions.
Gesture recognition is especially valuable in noisy environments where voice commands fail. A warehouse worker can direct a robot with hand signals instead of shouting over machinery. Smart wheelchairs illustrate the full sensor stack: they combine cameras, infrared sensors, lasers, and ultrasonic sensors to understand the user’s environment and navigate safely through it.
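The sensor stack described above ultimately feeds a simple decision: how close is the nearest person or obstacle, and what should the robot do about it? The sketch below fuses readings from several range sensors into a stop/slow/cruise command. The sensor names and distance thresholds are illustrative, not taken from any real platform.

```python
# Minimal sketch of multi-sensor range fusion for safe navigation.
# Sensor names and thresholds are hypothetical placeholders.

def min_clearance(readings: dict[str, float]) -> float:
    """Return the smallest distance (in meters) reported by any sensor."""
    return min(readings.values())

def navigation_action(readings: dict[str, float],
                      stop_m: float = 0.3,
                      slow_m: float = 1.0) -> str:
    """Map the closest detected obstacle to a simple motion command."""
    d = min_clearance(readings)
    if d < stop_m:
        return "stop"   # someone is within the protective zone
    if d < slow_m:
        return "slow"   # reduce speed while a person is nearby
    return "cruise"

readings = {"ultrasonic_front": 2.4, "infrared_left": 0.8, "laser_right": 3.1}
navigation_action(readings)  # "slow": the infrared reading is closest
```

A real platform would also account for sensor noise and the direction of travel, but the core pattern, taking the most conservative reading across heterogeneous sensors, carries over.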
Speech recognition and natural language processing add another layer. Recent advances in vision-language-action models are changing the game here. These systems combine pretrained understanding of language and images with the ability to output motor commands. When someone tells a robot to “pick up the red cup near the keyboard,” the model already understands what cups look like, grasps spatial relationships like “near,” and connects the word “red” to visual features. This eliminates the need to program each task from scratch and lets non-engineers direct robots using ordinary language.
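A real vision-language-action model learns this grounding end to end inside a neural network, but the kind of structured command it must ultimately produce can be shown with a toy parser. Everything below, the action vocabulary, the color list, and the extraction rules, is a hypothetical sketch, not any model's actual interface.

```python
# Toy illustration of language grounding: turning "pick up the red cup
# near the keyboard" into a structured action. A genuine VLA model does
# this implicitly from pixels and text; this parser only shows the shape
# of the output. All vocabulary here is hypothetical.

import re

ACTIONS = {"pick up": "grasp", "put down": "release", "move": "move"}
COLORS = {"red", "blue", "green"}

def parse_command(text: str) -> dict:
    text = text.lower().strip()
    # Match a known action phrase at the start of the command.
    action = next((v for k, v in ACTIONS.items() if text.startswith(k)), None)
    # Find a color word, if any, to bind to visual features.
    color = next((c for c in COLORS if f" {c} " in f" {text} "), None)
    # Extract a spatial relation like "near the keyboard".
    relation = None
    m = re.search(r"\b(near|on|under)\s+the\s+(\w+)", text)
    if m:
        relation = {"type": m.group(1), "anchor": m.group(2)}
    # Crudely take the noun that follows the color word as the object.
    obj = None
    if color:
        m = re.search(rf"{color}\s+(\w+)", text)
        if m:
            obj = m.group(1)
    return {"action": action, "object": obj, "color": color,
            "relation": relation}

parse_command("Pick up the red cup near the keyboard")
# {'action': 'grasp', 'object': 'cup', 'color': 'red',
#  'relation': {'type': 'near', 'anchor': 'keyboard'}}
```

The point of the sketch is the output structure: an action, a target object, its attributes, and a spatial anchor are exactly the pieces a downstream motion planner needs.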
HRI in Surgery and Healthcare
One of the most demanding applications of human-robot interaction is robotic surgery. The goal of haptic (touch-based) feedback in these systems is “transparency,” where the surgeon feels as though their own hands are contacting the patient rather than operating a remote mechanism. Achieving this requires sensors on the robotic instruments to detect forces and textures, plus displays that relay that information back to the surgeon’s hands or eyes.
When direct force feedback isn’t feasible, designers use sensory substitution. A color map overlaid on the endoscopic camera view can show tissue stiffness across a surface, helping surgeons locate hard lumps more accurately. Audio cues and vibrotactile signals on the controller offer additional channels. The key design rule is that these visual overlays must not distract from the surgeon’s primary view of the patient.
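The stiffness-to-color mapping at the heart of such an overlay can be sketched in a few lines. The green-to-red gradient and the normalized stiffness scale below are illustrative choices, not taken from any particular surgical system.

```python
# Sketch of sensory substitution: mapping measured tissue stiffness to a
# color overlay on the camera view. Scale and colors are hypothetical.

def stiffness_to_rgb(stiffness: float) -> tuple[int, int, int]:
    """Map normalized stiffness in [0, 1] to a green-to-red gradient.

    Soft tissue renders green; a hard lump renders red, so it stands
    out in the overlay even though the surgeon feels no force.
    """
    s = max(0.0, min(1.0, stiffness))        # clamp out-of-range sensor noise
    return (int(255 * s), int(255 * (1 - s)), 0)

# A 1D strip of stiffness readings swept across a tissue surface:
strip = [0.1, 0.15, 0.2, 0.9, 0.85, 0.2]     # the ~0.9 region is a hard lump
overlay = [stiffness_to_rgb(s) for s in strip]
```

In practice the overlay would be alpha-blended onto the endoscopic frame at low opacity, consistent with the design rule that it must not obscure the surgeon's primary view.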
Tactile displays are another frontier. Arrays of tiny pins, individually controlled by shape-memory alloys or pneumatic systems, can reproduce the sensation of touching tissue. Building these small enough and light enough to sit at the end of a surgical controller without interfering with movement remains a significant engineering challenge.
The Uncanny Valley Problem
People generally respond more positively to robots that look and behave somewhat like humans. But there’s a well-documented tipping point: when a robot becomes very humanlike but not quite right, people find it unsettling or even repulsive. This dip in acceptance is called the uncanny valley.
Brain imaging research has explored the neural mechanisms behind this reaction. Studies using artificial faces with subtly altered proportions and unnatural skin tones (resembling android robots without visible mechanical parts) found that participants reliably rejected these almost-human faces. The reaction isn’t just mild discomfort. It registers as genuine eeriness that can override the benefits of a humanlike appearance.
For designers, this creates a practical tradeoff. A robot that looks clearly mechanical but behaves socially can be more accepted than one that tries to pass as human and falls short. Many successful social robots, like those used in autism therapy or hotel concierge roles, lean into a friendly, cartoonish aesthetic rather than chasing realism.
Privacy and Ethical Concerns
Robots collect data to function. A service robot in a store might use facial recognition to personalize a customer’s experience. A home companion robot listens through microphones and watches through cameras to respond to its user. This data collection raises real privacy risks, and users often underestimate how much information a robot is actually gathering.
Research has found a particularly counterintuitive pattern: robots with more humanlike physical forms tend to decrease people’s privacy concerns compared to non-embodied systems like apps or smart speakers. In other words, people may let their guard down around a cute robot more than they would around a laptop running the same software, even though the robot is potentially capturing more data through its cameras and microphones.
The European Union’s General Data Protection Regulation (GDPR) applies directly to robots, requiring consent for data processing, anonymization of collected data, breach notifications, and safe cross-border data transfers. But regulation alone doesn’t solve the problem if users don’t understand what data is being collected in the first place. Clear disclosure, built into the interaction itself rather than buried in terms of service, is an area where HRI design and ethics intersect.
Safety Standards for Collaborative Robots
When robots share physical space with people, particularly in factories and warehouses, safety becomes non-negotiable. Two international standards form the backbone of collaborative robot safety. ISO 10218 covers general safety requirements for industrial robots, while ISO/TS 15066 adds specific requirements for collaborative systems where robots and humans work side by side without full physical separation.
These standards govern things like how much force a robot can exert on contact with a person, how quickly it must stop when a human enters its workspace, and what speed limits apply during collaborative tasks. Manufacturers design collaborative robots (often called cobots) to comply with these thresholds, using force-limiting joints and sensors that detect unexpected contact and trigger immediate stops.
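The power- and force-limiting logic can be sketched as a per-tick safety check. The numeric limits below are placeholders for illustration only; real values come from the body-region tables in ISO/TS 15066 and from the risk assessment for the specific work cell.

```python
# Sketch of a power- and force-limiting check in the spirit of ISO/TS 15066.
# All force limits below are hypothetical placeholders, NOT values from
# the standard; a real cobot uses the standard's body-region tables.

BODY_REGION_FORCE_LIMIT_N = {
    "hand": 140.0,     # illustrative number only
    "chest": 140.0,    # illustrative number only
    "skull": 130.0,    # illustrative number only
}

def protective_stop_needed(measured_force_n: float, region: str) -> bool:
    """True if measured contact force exceeds the limit for that body region."""
    return measured_force_n > BODY_REGION_FORCE_LIMIT_N[region]

def control_step(measured_force_n: float, region: str,
                 commanded_speed: float) -> float:
    """One control-loop tick: stop immediately on over-limit contact,
    otherwise pass the commanded speed through unchanged."""
    if protective_stop_needed(measured_force_n, region):
        return 0.0     # trigger an immediate protective stop
    return commanded_speed
```

The essential property is that the check runs every control cycle and fails safe: any over-limit contact zeroes the commanded speed before the next motion update.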
Measuring the Human Side of HRI
Designing good human-robot interaction requires measuring how the human is actually experiencing it. Researchers use a combination of subjective, behavioral, and physiological methods. The NASA Task Load Index is one of the most common tools: it’s a questionnaire that captures how mentally demanding, physically demanding, and frustrating a person found a task. Alongside it, scales measuring negative attitudes toward robots and robot-specific anxiety help quantify a worker’s comfort level.
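In its unweighted "raw" form, the NASA Task Load Index reduces to a simple average: each of the six subscales is rated on a 0-100 scale, and the overall workload score is their mean. The subscale names and example ratings below follow that standard structure.

```python
# Raw (unweighted) NASA-TLX: overall workload is the mean of the six
# subscale ratings, each on a 0-100 scale.

TLX_SUBSCALES = ("mental", "physical", "temporal",
                 "performance", "effort", "frustration")

def raw_tlx(ratings: dict[str, float]) -> float:
    """Compute the raw TLX score from a complete set of subscale ratings."""
    missing = set(TLX_SUBSCALES) - ratings.keys()
    if missing:
        raise ValueError(f"missing subscales: {missing}")
    return sum(ratings[s] for s in TLX_SUBSCALES) / len(TLX_SUBSCALES)

# Example ratings for one participant after a robot-interaction task:
scores = {"mental": 70, "physical": 20, "temporal": 55,
          "performance": 30, "effort": 60, "frustration": 45}
raw_tlx(scores)  # ≈ 46.7
```

The full TLX protocol adds pairwise weighting of the subscales before averaging, but the raw mean is widely used and easy to compare across conditions.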
Behavioral measures add another layer. A common technique introduces a simple secondary task, like responding to beeps, alongside the primary robot interaction. When someone starts missing those beeps, it signals their mental workload is too high. Physiological monitoring goes even deeper: brain activity measured through EEG and blood-oxygen tracking through near-infrared sensors can predict when a worker’s cognitive load is climbing before they’re consciously aware of it. In one study, combining these brain signals achieved 77.8% accuracy in predicting missed secondary-task responses.
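The secondary-task technique can be sketched as a sliding-window miss counter: record whether each beep got a response, and flag overload when the recent miss rate climbs too high. The window size and threshold below are hypothetical tuning choices.

```python
# Sketch of a secondary-task workload probe: count missed beep responses
# in a sliding window and flag overload past a threshold. The window
# length and 30% miss threshold are hypothetical, not from any study.

from collections import deque

class WorkloadProbe:
    def __init__(self, window: int = 10, miss_threshold: float = 0.3):
        self.responses = deque(maxlen=window)   # True = responded to beep
        self.miss_threshold = miss_threshold

    def record(self, responded: bool) -> None:
        self.responses.append(responded)

    def overloaded(self) -> bool:
        """A high miss rate on the secondary task suggests the primary
        robot interaction is consuming too much of the user's attention."""
        if not self.responses:
            return False
        miss_rate = self.responses.count(False) / len(self.responses)
        return miss_rate > self.miss_threshold

probe = WorkloadProbe()
for responded in [True, True, False, False, True, False]:
    probe.record(responded)
probe.overloaded()  # True: 3 of 6 beeps missed (50% > 30%)
```

Physiological signals like EEG would feed a trained classifier rather than a fixed threshold, but the behavioral probe above is cheap enough to run in almost any study.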
These measurement tools help designers identify where an interaction is causing unnecessary stress or confusion, then redesign the robot’s behavior, timing, or communication style to reduce that burden.