Telepresence is technology designed to make you feel physically present in a place you’re not. Unlike a standard video call, where you’re clearly watching someone on a screen, telepresence aims to eliminate that sense of distance entirely. The remote person appears life-size, at eye level, as if they’re sitting right across the table from you. The concept applies far beyond meetings, though. It spans robotic surgery, space exploration, and emerging 3D displays that project holographic images without a headset.
How Telepresence Differs From Video Calls
The easiest way to understand telepresence is to compare it to something familiar. On a regular video call, you see a face in a small rectangle on your laptop or phone. The person looks miniaturized, off-center, and flat. You’re clearly looking at a screen. Telepresence systems reverse every one of those limitations. Large screens display remote participants at full life size, positioned at eye level, so in-person attendees feel like they’re sitting directly across the table from someone who is actually hundreds of miles away.
This distinction is architectural, not just cosmetic. A telepresence room is purpose-built: the furniture, lighting, camera angles, and screen placement are all calibrated so that both sides of a conversation share what feels like one continuous space. A video call on a laptop or phone will never be telepresence, no matter how sharp the camera is. The difference is whether the technology disappears from your awareness or stays front and center.
Where the Idea Came From
The term was coined by MIT researcher Marvin Minsky in 1980. His original vision wasn’t about meetings at all. He was describing teleoperation systems: remote-controlled robotic hands that would let a person manipulate physical objects from a distance while feeling as though they were touching those objects directly. The concept has since expanded to cover any technology that creates a convincing sense of “being there,” whether that means sitting in a boardroom, operating on a patient, or driving a rover on another planet.
What Makes It Feel Real
Two categories of factors determine whether telepresence actually works: the technology and the person using it.
On the technical side, latency is the most critical variable. Telepresence systems require a round-trip delay of no more than about 150 milliseconds for the experience to feel natural and lag-free. Standard video conferencing tolerates 400 to 450 milliseconds, which is fine for a casual call but creates a noticeable disconnect for immersive experiences. Bandwidth matters too. Each 4K video stream needs a minimum of 25 Mbps, and multi-screen setups multiply that requirement quickly.
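To make those numbers concrete, here is a back-of-the-envelope sketch that checks a connection against the thresholds above. The function name and the classification logic are illustrative, not drawn from any particular product:

```python
# Rough check of a network link against the telepresence thresholds
# cited above. All names here are illustrative.

IMMERSIVE_LATENCY_MS = 150      # max round-trip delay for a natural feel
STANDARD_CALL_LATENCY_MS = 450  # upper bound tolerated by ordinary video calls
MBPS_PER_4K_STREAM = 25         # minimum bandwidth per 4K video stream

def link_assessment(round_trip_ms: float, num_4k_streams: int) -> str:
    """Classify a connection for telepresence use."""
    required_mbps = num_4k_streams * MBPS_PER_4K_STREAM
    if round_trip_ms <= IMMERSIVE_LATENCY_MS:
        verdict = "immersive telepresence feasible"
    elif round_trip_ms <= STANDARD_CALL_LATENCY_MS:
        verdict = "ordinary video call quality only"
    else:
        verdict = "noticeable lag even for casual calls"
    return f"{verdict}; needs at least {required_mbps} Mbps downstream"

# A three-screen telepresence room on a 120 ms round-trip link:
print(link_assessment(120, 3))  # immersive ...; needs at least 75 Mbps
```

Note how quickly the bandwidth side compounds: a single 4K stream fits comfortably in a home connection, but a three-screen room already needs 75 Mbps of sustained, low-jitter throughput.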
On the human side, your brain has to cooperate. Research on the psychology of presence shows that individual traits, particularly a person’s ability to become absorbed in an experience and their willingness to suspend disbelief, directly influence how “real” a telepresence session feels. Two people in the same room with the same equipment can have meaningfully different experiences based on these cognitive tendencies. The technology sets the ceiling, but your brain determines how close you get to it.
Telepresence in Surgery
Robotic surgery systems are among the highest-stakes applications of telepresence. A surgeon sits at a console, sometimes in a different room or a different city, and controls robotic arms that perform the actual procedure. These systems dramatically enhance precision, dexterity, and visualization compared to traditional techniques.
The major limitation right now is touch. Current commercially available surgical robots don’t provide meaningful haptic feedback, meaning the surgeon can see what the instruments are doing but can’t feel the resistance of tissue or the tension of a suture. This gap has real consequences. In cardiac surgery, fine sutures are frequently broken and delicate tissues torn because the surgeon can’t sense how much force the robot is applying. Research at Johns Hopkins and other institutions has shown that adding force feedback reduces tissue-damaging errors by a factor of three. Surgeons consistently describe the loss of tactile sensation as the single biggest restriction of current robotic systems, alongside a steep learning curve.
Restoring that sense of touch is one of the most active areas of development in medical telepresence. Beyond simply preventing errors, haptic feedback opens up possibilities like “virtual fixtures,” where the system actively guides or constrains a surgeon’s movements to improve safety during complex tasks.
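As an illustration of the idea, the sketch below implements the simplest kind of virtual fixture: a “forbidden-region” constraint that cancels any commanded motion heading into a protected boundary. The vector math is standard, but the function and its interface are hypothetical, not taken from any real surgical system:

```python
import numpy as np

# A minimal sketch of a "forbidden-region" virtual fixture, assuming the
# control loop operates on Cartesian velocity commands. Names are
# illustrative, not from any specific surgical platform.

def apply_virtual_fixture(cmd_velocity, boundary_normal):
    """Remove any velocity component that would push the tool
    through a forbidden boundary (e.g., toward delicate tissue).

    cmd_velocity    -- surgeon's commanded tool velocity (3-vector)
    boundary_normal -- unit normal pointing away from the forbidden region
    """
    n = boundary_normal / np.linalg.norm(boundary_normal)
    penetration = np.dot(cmd_velocity, -n)  # speed toward the boundary
    if penetration > 0:
        # Cancel only the offending component; motion along the
        # boundary surface passes through unchanged.
        cmd_velocity = cmd_velocity + penetration * n
    return cmd_velocity

# Commanded motion partly into the boundary gets deflected along it:
safe = apply_virtual_fixture(np.array([0.0, 2.0, -1.0]),
                             np.array([0.0, 0.0, 1.0]))
print(safe)  # [0. 2. 0.]
```

Real systems are far more sophisticated, but the principle is the same: the robot becomes an active safety layer between the surgeon’s hand and the patient, rather than a passive extension of it.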
Space Exploration and Telerobotics
NASA uses telepresence to extend human reach without putting astronauts in danger. The agency’s Human Exploration Telerobotics project places robots in environments where humans can’t easily go, then lets operators control them remotely with immersive feedback.
Two of these robots, Robonaut 2 and a system called SPHERES (Synchronized Position Hold, Engage, Reorient Experimental Satellites), operate aboard the International Space Station. Others, like the K10 planetary rover, are tested at NASA field centers on Earth. During one demonstration aboard the station, astronaut Chris Cassidy wore a vest, gloves, and visor to telerobotically control Robonaut 2’s movements, testing whether astronauts in orbit could effectively operate robots on a planetary surface below them. The core challenge is the same one that affects all telepresence: latency. Signal delays between Earth and Mars, for example, can stretch to over 20 minutes each way, which makes real-time control impossible and requires a fundamentally different approach to remote operation.
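The arithmetic behind that delay is simply distance divided by the speed of light. The short sketch below uses approximate orbital extremes for the Earth–Mars distance:

```python
# Why Earth-to-Mars teleoperation can't be real time: the signal delay
# is set by the speed of light. Distances are approximate orbital extremes.

SPEED_OF_LIGHT_KM_S = 299_792

distances_km = {
    "closest approach": 54.6e6,    # ~54.6 million km
    "farthest separation": 401e6,  # ~401 million km
}

for label, km in distances_km.items():
    one_way_min = km / SPEED_OF_LIGHT_KM_S / 60
    print(f"{label}: one-way delay of about {one_way_min:.1f} minutes")

# closest approach: one-way delay of about 3.0 minutes
# farthest separation: one-way delay of about 22.3 minutes
```

No amount of network engineering can shrink that delay, which is why deep-space telerobotics relies on supervisory control, sending goals and plans rather than joystick inputs, instead of the moment-to-moment control used on the space station.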
Holographic and Light Field Displays
The next frontier for telepresence is moving beyond flat screens entirely. Light field displays project dozens of simultaneous perspectives of 3D content, creating genuine depth and parallax (the effect where objects shift as you move your head, just like in real life) without requiring headsets, eye tracking, or any wearable device.
Current commercial light field displays can generate up to 100 perspectives at 60 frames per second across a 53-degree viewing angle, with up to 9 inches of virtual depth on a 4K OLED panel. Multiple people can view the same 3D image simultaneously, each seeing it from their own natural angle. This is a significant shift from virtual reality headsets, which isolate each user in a private experience. Light field technology brings 3D content into a shared physical space, which is closer to what most people imagine when they hear the word “telepresence”: not strapping on goggles, but looking across the room and seeing someone who appears to actually be there.
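Those specifications also reveal the central engineering trade-off of light field displays: every perspective is multiplexed onto the same panel, so adding views costs per-view resolution. The sketch below makes the simplifying assumption that the panel’s pixels are split evenly among views:

```python
# The core trade-off in a light field display: one panel's pixels are
# multiplexed across many views, so more perspectives means fewer pixels
# per perspective. An even split is assumed here as a simplification.

PANEL_PIXELS = 3840 * 2160  # 4K OLED, ~8.3 megapixels
NUM_VIEWS = 100             # simultaneous perspectives
FRAME_RATE = 60             # frames per second

pixels_per_view = PANEL_PIXELS / NUM_VIEWS
views_rendered_per_second = NUM_VIEWS * FRAME_RATE

print(f"~{pixels_per_view / 1e3:.0f}k pixels available per perspective")
print(f"{views_rendered_per_second:,} view renders per second for the GPU")

# ~83k pixels available per perspective
# 6,000 view renders per second for the GPU
```

Each individual perspective is therefore far below HD resolution, and the rendering workload scales with the number of views, which is why higher-resolution panels and faster GPUs translate so directly into better light field experiences.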
The gap between current light field displays and true holographic telepresence, where a full-size human figure appears in three dimensions in your room, remains substantial. But the trajectory is clear. Each generation of display reduces the compromises, pushing closer to the experience Minsky imagined in 1980: technology so convincing that the distance between two people simply stops mattering.