The Growing Complexity of Everyday Devices

Ken Pfeuffer

It is now a normal part of everyday life that many of us interact with a wide variety of digital devices, often continuously. Smartphones, laptops, desktop PCs, tablets, smartwatches, TVs, smart glasses, XR headsets, video game controllers, smart speakers, and even cars and household robotic appliances—computing is everywhere. Not everyone uses all of these, but a substantial portion of the population uses several every single day. And given our growing need to access information, communicate efficiently, and automate daily tasks, this trend is unlikely to slow down. New devices will continue to appear, older ones may evolve or merge, but the overall direction is toward more systems, not fewer.
From a scientific and historical perspective, this is hardly surprising. Mark Weiser famously foresaw a shift in human–computer relationships across three eras. The first era, after World War II, featured large mainframe computers: expensive machines that many individuals shared, a 1:N relationship. Next, the personal computing era emerged in the 1980s, placing a computer in every office and home and allowing each person to have their own machine, a 1:1 relationship. Around the early 2000s, ubiquitous computing arrived, driven largely by mobile and wearable technology. This era supports an N:1 relationship: each person is now surrounded by many computers. Today, this stage has fully unfolded.
This shift has advantages. We can communicate instantly across the world. We can access information anywhere and in many forms. Right now, I can photograph my dog with my smartphone, share it with my partner who is working elsewhere, and then jump into social media or email, all within seconds. These are small personal examples, but they are echoed in countless ways around the globe.
Alongside these benefits, however, come concerns: ethical, ergonomic, social, psychological, and ecological. From a human–computer interaction (HCI) perspective, one issue in particular has been insufficiently recognized: the fragmentation of user interfaces across devices.
At its core, a user interface exists to enable efficient interaction between human and machine, through two key components: input and output. Input captures human action and translates it into commands; output communicates the system’s response. Today, these mechanisms vary drastically across devices.
Outputs are primarily visual—screens on phones, laptops, TVs, watches, VR headsets, or public information displays. In many situations, we are surrounded by multiple screens simultaneously.
Inputs are even more diverse. Common examples include:

  • Multi-touch on phones, tablets, and smartwatches
  • The mouse and keyboard on desktop systems
  • Styluses for design and drawing
  • Trackpads on laptops
  • Laser pointers for projection systems
  • 3D pointing devices like those used in Wii or VR systems
  • Hand tracking in modern XR devices
  • Eye-tracking for gaze-based interaction on systems like Apple Vision Pro
  • Buttons and gestures embedded in smart glasses, headphones, remotes, and IoT appliances
This list is far from exhaustive, yet it reflects everyday reality for many users. The problem is not that these input and output techniques exist; each serves a useful purpose. The issue is that they exist in isolation. Each device typically assumes its own input method and its own output channel, forming a collection of 1:1 interaction silos: mouse for desktop, remote for TV, touchscreen for phone, hand gestures for XR, and so on.
This was manageable when households interacted with only one or two devices, say a desktop PC and a television. But today, and even more so in the future, this siloed landscape dramatically increases the complexity of everyday interaction. Users must learn not only how each system works but also how to manage the connections among them. Where is the remote? How do we connect the laptop to the conference room screen? Which app controls the smart light? Each micro-friction is small, but aggregated across the entire ecosystem they represent a major and growing interaction burden.
This is fundamentally the challenge of cross-device computing. HCI research has investigated this area for decades and has proposed multiple directions. Some solutions already exist, such as using smartphones to control TVs or other connected devices. But these approaches often introduce additional indirection: the user must locate the phone, open the right app, connect to the device, and only then interact.
More radical paradigm shifts have also been proposed. Nicolai Marquardt's work on Proxemic Interaction imagines systems that respond to physical presence and motion: approaching a device could automatically grant access to it. Other researchers envision XR headsets that allow spatial pointing at devices to control them from anywhere. These approaches highlight two major design strategies:
  • A universal controller, such as a smartphone or XR headset, that can operate all devices. This has practical appeal, but it creates dependency on a single device and risks discarding the unique affordances of existing input techniques. Losing or breaking the one universal controller would also eliminate access entirely.
  • More direct, physical interactions rooted in the real world. Spatial pointing and physical manipulation feel intuitive and build on the historical trend toward more “natural” input methods. However, relying exclusively on physicality can introduce fatigue, inefficiency, and logistical challenges. Indirect devices like the mouse or trackpad, while less natural, are fast, precise, and comfortable—and users value them for those reasons.
Other approaches, such as context-aware computing (e.g., Dey, Gellersen, Schmidt), have shown promise and have influenced many modern mobile systems. However, context sensing has not yet become a dominant method for cross-device input; explicit touch and gesture interactions remain far more common.
On reflection, there appears to be a genuine technological and conceptual gap. The complexity of cross-device interaction is real, but because we live within it, we may not fully recognize its significance. Future generations may look back and wonder how we managed such a fragmented landscape of screens, controls, and interaction principles, much as we now view early programming interfaces as arcane.
The key question becomes: what might address this problem going forward? Is the solution primarily technological, user-interface-driven, hardware-driven, or something else? The difficulty is compounded by the siloed strategies of major technology companies, each optimizing within its own ecosystem.
Still, the first step is acknowledging the issue and exploring new directions. A future with N:N input/output connections, where any output can be controlled through any input method at hand, may represent the next major evolution. Achieving this, however, will require stepping beyond the current device-centric mindset and rethinking interaction at the ecosystem level rather than the product level.
