
Experience 2.0 and Beyond - Engineering Cross Devices and Multiple Realities

Conference paper

DOI: 10.1145/3660515.3662838
Authors: Gerrit Cornelis van der Veer / Shah Rukh Humayoun / Vera Marie Memmesheimer / Achim Ebert

Extracted Abstract:

Today, augmented and mixed reality (AR/MR) are considered very promising. Even current handheld displays, such as smartphones and tablets, can provide wide and low-budget access to such applications. New devices, like Apple's Vision Pro, smart tattoos¹ [8], interactive clothing, and wearables², promise an even higher immersion and are opening totally new and exciting worlds for researchers, developers, and users. However, current research in this area faces many challenges, e.g., suitable interaction techniques, better user experience, navigation in MR environments, high cross-device UX, etc. These challenges still limit the usage of AR/MR applications in real-world activities. Targeting these challenges, our workshop will provide a platform for researchers, developers, and professionals to discuss issues and define novel methods and approaches suitable for developing the experience 2.0 and beyond: new interaction paradigms, user interfaces, 3D visualizations, and soundscapes, as well as applications for cross-device AR/MR.

CCS CONCEPTS: • Human-centered computing → Human computer interaction (HCI); Ubiquitous and mobile computing; Visualization.

KEYWORDS: Augmented reality; mixed reality; spatial computing; handheld displays; head-mounted displays; mobile devices; human-computer interaction; visualization; soundscapes.

ACM Reference Format: Gerrit van der Veer, Shah Rukh Humayoun, Vera Marie Memmesheimer, and Achim Ebert. 2024. Experience 2.0 and Beyond – Engineering Cross Devices and Multiple Realities. In Companion of the 16th ACM SIGCHI Symposium on Engineering Interactive Computing Systems (EICS Companion '24), June 24–28, 2024, Cagliari, Italy. ACM, New York, NY, USA, 3 pages. https://doi.org/10.1145/3660515.3662838

¹ https://www.microsoft.com/en-us/research/project/smart-tattoos/
² https://interactivewear.com.au/

1 BACKGROUND

Augmented and Mixed Reality (AR/MR) applications that extend the physical world with virtual components have been deemed a promising technology in various domains including education, healthcare, entertainment, and industry. Apart from special AR/MR head-mounted displays (HMDs), handheld displays (HHDs) such as smartphones and tablets offer widely available and low-budget access to AR/MR environments. Despite the huge potential assigned to these technologies, applications in the real world are still rare. In the context of AR/MR applications, a major challenge concerns the design and development of suitable interaction techniques. In contrast to head-mounted displays, tablets and smartphones were originally designed not for spatial user interfaces but for 2D applications. For these 2D applications, touch-based input has proven to be very suitable and has become established as an intuitive input paradigm across many cultures and generations. However, independent of the device used, AR/MR applications usually require 3D input.
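The gap between 2D touch input and the 3D input that AR/MR requires is often bridged by casting a ray from the touched screen point into the scene and selecting the first object it hits. The following minimal sketch illustrates this idea; it is not code from the paper or the cited works, and all function names and parameters are hypothetical:

```python
# Illustrative sketch: map a 2D touch on a handheld display to a 3D selection
# by ray casting. Assumes a pinhole camera model and a 4x4 camera-to-world
# pose from the device's tracking system (hypothetical inputs, not a real SDK).
import numpy as np

def touch_to_ray(touch_px, screen_size_px, fov_y_deg, cam_pose):
    """Convert a 2D touch position into a 3D ray (origin, direction) in world space."""
    w, h = screen_size_px
    aspect = w / h
    tan_half_fov = np.tan(np.radians(fov_y_deg) / 2.0)

    # Normalized device coordinates in [-1, 1], y pointing up.
    ndc_x = (2.0 * touch_px[0] / w) - 1.0
    ndc_y = 1.0 - (2.0 * touch_px[1] / h)

    # Ray direction in camera space (camera looks down -Z).
    dir_cam = np.array([ndc_x * tan_half_fov * aspect,
                        ndc_y * tan_half_fov,
                        -1.0])
    dir_cam /= np.linalg.norm(dir_cam)

    # Transform origin and direction into world space.
    origin_world = cam_pose[:3, 3]
    dir_world = cam_pose[:3, :3] @ dir_cam
    return origin_world, dir_world

def pick_sphere(origin, direction, center, radius):
    """Return the hit distance along the ray for a sphere, or None if missed."""
    oc = origin - center
    b = np.dot(oc, direction)
    c = np.dot(oc, oc) - radius ** 2
    disc = b * b - c
    if disc < 0:
        return None
    t = -b - np.sqrt(disc)
    return t if t >= 0 else None

# Example: a touch in the screen center picks a sphere 2 m in front of the camera.
origin, direction = touch_to_ray((540, 960), (1080, 1920), 60.0, np.eye(4))
print(pick_sphere(origin, direction, np.array([0.0, 0.0, -2.0]), 0.25))
```

This is only one of several options; the advanced touch and device-based techniques cited below refine or replace such raycast selection to address occlusion and accuracy issues.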
To this end, previous research has proposed various interaction techniques, including one-handed in-air gestures (e.g., [1]), advanced touch input (e.g., [6]), or device-based input which maps the HHD's position and orientation to virtual objects (e.g., [7]). These existing interaction techniques, however, come with different drawbacks, including fatigue, occlusion, inaccuracy, and low learnability. Out-of-the-box interaction techniques for MR head-mounted displays (HMDs) typically rely on eye movement and hand gestures. Apart from research on advanced hand gestures (e.g., [2] and [12]), previous work has also proposed input techniques based on foot movement (e.g., [11]), voice (e.g., [9]), or multimodal systems such as combining HMDs with smartphones or smartwatches [3]. While Apple's Vision Pro attempts to reduce fatigue issues through hand tracking using downward-facing cameras, an easy-to-learn and accurate interaction method for 3D object manipulation (i.e., object translation and rotation) is still missing. Hand gestures, while considered a promising and intuitive input modality, have limited applicability for 3D manipulation because they are subject to the physical limitations of human hand and wrist movements. On top of that, existing interaction techniques are often developed for specific applications and hardware. However, to foster the application of MR in real-world settings, novel interaction paradigms are needed that are suitable for different devices and applications. Hence, future research in this area should focus more on the following questions:

• Which of these interaction techniques (e.g., hand gestures, touch gestures, facial gestures, device-based gestures, etc.) are best suited for which types of tasks, devices, and users?
• How can state-of-the-art interaction techniques and approaches be improved to make them well suited for new sets of devices such as Apple's Vision Pro, interactive clothing, wearables, etc.?
• How can we effectively teach new interaction paradigms when they are very different from well-established touch-based techniques?
• How can we guarantee the best possible usability across device classes? This includes, for instance, learnability – a user should not be forced to relearn all interaction paradigms just because of switching from an HMD to a tablet.

Another challenge is related to the small screen size of HHDs, which restricts the user's field of view in AR/MR applications. In this context, previous research has proposed different 3D visualizations for navigating to a point of interest that is currently out of view (e.g., [13]). While such visual cues can support navigation, they are also prone to visual clutter in more complex real-world settings. Hence, future work should deepen research toward appropriate navigation interfaces while considering the limited screen size. As with HMDs, the application of HHDs will often include physical actions like movement: hand movements guided by vision; holding the display at a location that is interesting for MR; searching or traveling guided by knowledge or expectation; walking the HHD to a location where VR or AR is expected to enrich information, understanding, or experience. During these actions, the visualizations should provide supporting information.
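As an illustration of the kind of off-screen navigation cue discussed above, the sketch below computes where on a handheld screen an arrow toward an out-of-view point of interest (POI) could be drawn. It is a minimal, hypothetical example under assumed conventions (POI given in camera space, camera looking down -Z), not an implementation from the cited work:

```python
# Illustrative sketch: place a screen-edge cue pointing toward an off-screen POI.
# Returns whether the POI is visible, a normalized screen position in [-1, 1]^2
# for the cue, and the arrow's direction in degrees from the +x screen axis.
import numpy as np

def offscreen_indicator(poi_cam, fov_y_deg=60.0, aspect=9 / 16):
    tan_half_fov = np.tan(np.radians(fov_y_deg) / 2.0)

    # Perspective-project the POI; a POI behind the camera is never on screen.
    if poi_cam[2] < 0:
        x = poi_cam[0] / (-poi_cam[2] * tan_half_fov * aspect)
        y = poi_cam[1] / (-poi_cam[2] * tan_half_fov)
        if abs(x) <= 1 and abs(y) <= 1:
            return True, (x, y), 0.0

    # Off screen: aim the arrow along the POI's screen-plane direction and
    # clamp the cue position to the screen border.
    dx, dy = float(poi_cam[0]), float(poi_cam[1])
    if dx == 0 and dy == 0:          # POI straight behind the camera
        dx = 1.0
    scale = 1.0 / max(abs(dx), abs(dy), 1e-6)
    x, y = np.clip(dx * scale, -1, 1), np.clip(dy * scale, -1, 1)
    angle = np.degrees(np.arctan2(dy, dx))
    return False, (x, y), angle

# Example: a POI to the user's right and slightly behind the camera.
print(offscreen_indicator(np.array([2.0, 0.5, 1.0])))
```

Even a simple cue like this must compete for the limited screen space of an HHD, which is exactly the clutter problem raised above.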
Targeting the above-mentioned concerns and challenges, we envision that research must address the need to bring AR/MR applications and technologies to the masses using not only HMDs but also HHDs, due to their availability and usage in daily life. Our workshop will provide a platform for researchers, developers, and professionals to discuss issues and define novel methods and approaches suitable for developing new cross-device interaction paradigms, user interfaces, 3D visualizations, and applications for AR/MR. Moreover, it will also be discussed how to make real-time AR/MR applications more intuitive and usable so they can be used in different domains. Researchers and practitioners are invited to submit contributions including problem statements, technical solutions, experience reports, planned work, and vision papers.

2 WORKSHOP TOPICS

The workshop will be dedicated to observations, concepts, approaches, techniques, development, and practice that allow understanding, facilitating, and increasing the advancement of AR/MR applications. Topics of interest for position paper submission include, but are not limited to:

• AR/MR applications for handheld devices and head-mounted displays
• User interface design for AR/MR applications on handheld devices and head-mounted displays
• New interaction techniques and modalities for AR/MR
• 3D visualization for AR/MR applications
• Cross-device frameworks for AR/MR
• User experience in AR/MR
• Navigation in AR/MR
• Methodologies, frameworks, concepts, and tool support for cross-device AR/MR
• Evaluation and user studies
• Case studies and best practices
• Translation between senses [10]
• Multimodal interactive art [5]

3 WORKSHOP FORMAT AND REQUIRED SERVICES

3.1 Target Audience and Pre-workshop Plans

The intended audience of the workshop is a mix of industrial and academic participants with experience in multi-media and/or multimodal engineering or art. We expect around 15 active participants. Submitted position papers should be 4 to 8 pages long (excluding references) using the ACM Master Article Submission Templates (single column). The organizers will carefully review the submitted papers. Submissions will be reviewed based on workshop relevance, academic rigor, innovation, and industrial or artistic applicability. Accepted position papers will be made available on the workshop website two weeks before the workshop date, and all participants will be assumed to have read them before the meeting.

3.2 Workshop Format and Activities

The workshop will have a strong focus on discussions and interactions between all accepted participants. The workshop will be held as a full-day synchronous workshop and consist of four 1.5-hour sessions. The workshop day will start with a keynote by a pioneer in the field of computer art or gaming (30 minutes plus 10 minutes of discussion). After that, each participant will briefly summarize her/his position (1 PPT slide, 3-5 min). The remaining sessions will start by a brief

Level 1: Include/Exclude

  • Papers must discuss situated information visualization* (as defined by Willett et al.) in the application domain of CH.
    *A situated data representation is a data representation whose physical presentation is located close to the data’s physical referent(s).
    *A situated visualization is a situated data representation for which the presentation is purely visual – and is typically displayed on a screen.
  • Representation must include abstract data (e.g., metadata).
  • Papers focused solely on digital reconstruction without information visualization aspects are excluded.
  • Posters and workshop papers are excluded to focus on mature research contributions.