
How Will Mixed Reality and Advanced Displays Reshape Our World?

The emergence of mixed reality and innovative display technologies promises to dissolve the boundary between the digital and the physical into seamless, ubiquitous computing. The conceptual foundation was laid in the early 1990s, when pioneers Paul Milgram and Fumio Kishino framed a reality-virtuality continuum spanning completely real to completely virtual environments. Between those poles sits mixed reality (MR), blending the two in fluid ratios of augmented reality and augmented virtuality. This framework set the stage for dynamic, contextually aware interfaces situated across reality and cyberspace.

Virtual reality (VR) and augmented reality (AR) offered the first stepping stones toward commonplace mixed reality experiences, and the rise of interaction modalities such as touchscreens, natural gestures, and spatial mapping unlocked more intuitive navigation of blended spaces. As these technologies mature, MR aims to make interfaces disappear into the fabric of reality: displays overlay graphics onto everyday physical objects and environments, touchable interfaces allow direct manipulation of virtual content, and spatial audio completes the multi-sensory immersion. The result is an approaching era of ubiquitously integrated reality-virtuality, in which computing recedes invisibly into the background of life itself.

SmartSkin: An Infrastructure for Freehand Manipulation on Interactive Surfaces

Jun Rekimoto · 01/04/2002

At the heart of this research by Rekimoto lies SmartSkin, a system that introduces new ways to interact with computers beyond traditional input devices. It is a landmark work in gesture-based interaction that helped steer modern touchscreen technology.

  • SmartSkin: A system that combines capacitive sensing and machine learning to detect and interpret multiple hand gestures, marking a distinct shift from single-point interaction.
  • Capacitive Sensing: The technology at the foundation of SmartSkin, enabling detection of finger positions and movements even before physical contact with the surface (a position-estimation sketch follows this list).
  • Gesture-Based Interaction: SmartSkin allows complex, multi-point interactions, providing an organic, intuitive user experience and signaling a departure from mouse-and-keyboard interfaces.
  • Machine Learning: Rekimoto used ML to process the sensor data and interpret gestures, pioneering its use in HCI interfaces for more adaptive, personalized user experiences.
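To make the sensing concrete, here is a minimal Python sketch of how fingertip positions might be estimated from a 2D grid of capacitance readings. This is not Rekimoto's implementation: the grid values, threshold, and centroid refinement below are illustrative assumptions, offered in the spirit of the interpolation the paper describes for obtaining finer resolution than the electrode grid itself.

```python
import numpy as np

def estimate_finger_positions(grid, threshold=0.5):
    """Estimate fingertip positions from a 2D grid of capacitance readings.

    Each cell holds a normalized proximity value (1.0 = touching,
    0.0 = far away). We find local peaks above a threshold and refine
    each peak to sub-cell precision using a weighted centroid of its
    3x3 neighborhood.
    """
    positions = []
    rows, cols = grid.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            patch = grid[r - 1:r + 2, c - 1:c + 2]
            # A peak: above threshold and the maximum of its neighborhood.
            if grid[r, c] >= threshold and grid[r, c] == patch.max():
                total = patch.sum()
                # Weighted centroid refines the position between electrodes.
                dr = (patch * np.arange(-1, 2)[:, None]).sum() / total
                dc = (patch * np.arange(-1, 2)[None, :]).sum() / total
                positions.append((r + dr, c + dc))
    return positions

# Example: a hypothetical 5x5 sensor grid with one finger near cell (2, 3).
sensor = np.zeros((5, 5))
sensor[2, 3] = 1.0
sensor[1:4, 2:5] += 0.2
print(estimate_finger_positions(sensor))  # ~[(2.0, 3.0)]
```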

Impact and Limitations: SmartSkin paved the way for contemporary touchscreen technology and gestural interfaces, seen today in smartphone touchscreens and interactive kiosks. Nevertheless, issues such as gesture ambiguity and "gorilla arm" fatigue remain unaddressed; future research could focus on mitigating these challenges and refining gesture recognition algorithms.

A Taxonomy of Mixed Reality Visual Displays

Paul Milgram, Fumio Kishino · 01/12/1994

Paul Milgram and Fumio Kishino's "A Taxonomy of Mixed Reality Visual Displays" is a foundational paper that has significantly influenced the HCI field, particularly in the understanding and classification of mixed reality (MR) systems. Published in 1994, it predates the current boom in virtual and augmented reality, setting the stage for future research and development.

  • Reality-Virtuality Continuum: Milgram and Kishino introduce the Reality-Virtuality (RV) Continuum, a spectrum ranging from a fully real to a fully virtual environment. For HCI practitioners, it provides a useful framework for conceptualizing and designing interfaces for mixed reality experiences.
  • Display Taxonomy: The authors organize different kinds of displays into a clear taxonomy, enabling designers to specify and construct MR experiences more precisely; it continues to inform how HCI specialists approach display technology in MR environments (see the sketch after this list).
  • User Interaction: The paper raises the question of how users interact with MR displays, pushing the HCI community to think about novel interaction techniques, from gesture controls to gaze tracking, and opening a broad space for interface design.
  • Spatial Relationships: The taxonomy also addresses spatial registration and interaction between real and virtual objects, with implications for designing more intuitive and realistic MR experiences.
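As a way of operationalizing these ideas, the sketch below encodes the paper's three taxonomy dimensions (Extent of World Knowledge, Reproduction Fidelity, and Extent of Presence Metaphor) as a small Python data structure, and maps a hypothetical virtuality score onto the RV continuum. The numeric 0.0-1.0 scales and the cut-off values are illustrative assumptions; the paper treats these dimensions qualitatively.

```python
from dataclasses import dataclass

@dataclass
class MRDisplay:
    """A display system's position along Milgram and Kishino's three
    taxonomy dimensions, each normalized here to 0.0-1.0 for
    illustration (the paper describes them qualitatively).
    """
    name: str
    world_knowledge: float        # Extent of World Knowledge (EWK)
    reproduction_fidelity: float  # Reproduction Fidelity (RF)
    presence_metaphor: float      # Extent of Presence Metaphor (EPM)

def continuum_label(virtuality: float) -> str:
    """Rough placement on the Reality-Virtuality continuum.

    `virtuality` is a hypothetical score from 0.0 (fully real) to
    1.0 (fully virtual); the cut-offs are illustrative, not from
    the paper.
    """
    if virtuality < 0.1:
        return "real environment"
    if virtuality < 0.5:
        return "augmented reality (AR)"
    if virtuality < 0.9:
        return "augmented virtuality (AV)"
    return "virtual environment"

# Example: a video see-through AR headset, with made-up scores.
hmd = MRDisplay("video see-through HMD", 0.6, 0.5, 0.7)
print(hmd.name, "->", continuum_label(0.3))  # augmented reality (AR)
```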

Impact and Limitations: This paper's impact has been long-lasting, helping shape the way we understand, design, and implement mixed-reality systems today. However, it was formulated at a time when MR technologies were in their infancy. The taxonomy may need to be updated to address newer technologies and interaction paradigms, such as brain-computer interfaces or advanced haptics.
