Exploring Enhancement of AR-HUD Visual Interaction Design Through Application of Intelligent Algorithms

Jian Teng, Fucheng Wan, Yiquan Kong
DOI: 10.4018/IJITSA.326558

Abstract

This study aims to optimize the visual interaction design of AR-HUD and reduce cognitive load in complex driving situations. An immersive driving simulation incorporating eye-tracking technology was used to collect objective physiological indices, while subjective cognitive load was measured with the NASA-TLX. A visual cognitive load index was then integrated into a BP-GA neural network model for load prediction, from which an optimal solution for the AR-HUD design was derived. The optimized AR-HUD interface showed a significant reduction in cognitive load compared with the original prototype: the experimental group achieved a mean total score of 25.63 on the WP scale versus 43.53 for the control group, a reduction of 41.1%. This study presents an innovative approach to optimizing AR-HUD design that effectively reduces cognitive load in complex driving situations, and the findings demonstrate the potential of the proposed algorithm to enhance user experience and performance.
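As a point of reference for the subjective measure mentioned above, the following is a minimal sketch of how a weighted NASA-TLX workload score is conventionally computed from six subscale ratings and fifteen pairwise-comparison weights. The "WP scale" in the abstract is assumed here to refer to a weighted workload score of this kind; the ratings and weights below are purely illustrative and are not data from the study.

```python
# Minimal sketch of a weighted NASA-TLX workload score.
# Assumption: the abstract's "WP scale" denotes a weighted workload score;
# all numbers below are illustrative, not the study's data.

# Six NASA-TLX subscales, each rated 0-100
ratings = {
    "mental_demand": 55, "physical_demand": 20, "temporal_demand": 45,
    "performance": 30, "effort": 50, "frustration": 25,
}

# Weights come from 15 pairwise comparisons, so they sum to 15
weights = {
    "mental_demand": 5, "physical_demand": 1, "temporal_demand": 3,
    "performance": 2, "effort": 3, "frustration": 1,
}
assert sum(weights.values()) == 15

weighted_workload = sum(ratings[k] * weights[k] for k in ratings) / 15
print(f"weighted NASA-TLX workload: {weighted_workload:.2f}")
```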

Introduction

Augmented reality head-up display (AR-HUD) technology has gained considerable traction in driving assistance because it presents dashboard information while allowing the driver to keep their eyes on the road ahead. By overlaying virtual information onto the driver's field of view, the transparent AR-HUD display provides vital data such as speed, navigation instructions, and vehicle alerts. This technology therefore has the potential to enhance driving safety by alleviating cognitive load (Cao et al., 2022).

Interactive design for AR-HUD systems has advanced substantially in recent years, with the primary objectives of improving user experience and minimizing distraction during driving. A pivotal challenge is presenting information in a way that is easily comprehensible without diverting the driver's attention from the road. To address this challenge, researchers have devised a range of interaction design strategies tailored to AR-HUD systems. One prominent strategy uses color and brightness adjustments to emphasize crucial information while limiting distraction: appropriate color schemes and brightness levels make the display more legible and visually comfortable, and visually comfortable colors such as blue and green (Gabbard et al., 2020) can present information without monopolizing the driver's attention.

Another important facet of AR-HUD interaction design is the integration of audio and haptic feedback. Audio feedback delivers important updates, such as speed or navigation information, without requiring the driver to look away, while haptic feedback in the form of vibrations or touch provides a tangible response to significant cues such as speed-limit alerts or lane changes.

Machine learning algorithms have also emerged as a valuable component of AR-HUD interaction design. These algorithms analyze the driver's behavior and context to anticipate which information will be needed next, allowing the system to adjust the display content in real time and present the most relevant information in the most effective way, as sketched below. This not only reduces cognitive load and enhances driving safety but also improves the accuracy of the AR-HUD system, lowering the likelihood of errors or false alerts.

Finally, a noticeable trend in recent years has been the development of highly customizable and personalized AR-HUD systems. These systems allow drivers to tailor the display to their specific preferences and driving habits, including adjusting the size and position of information and selecting which information to display. Such customization mitigates cognitive load by presenting only pertinent information to the driver, optimizing the user experience (Zhang & Zhou, 2018).
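As a concrete illustration of the predictive display adjustment described above, the following sketch ranks candidate HUD elements by a relevance score for the current driving context and shows only the top items. The element names, context fields, and hand-tuned scores are assumptions chosen for illustration; they stand in for a learned behavior/context model and do not reproduce any specific system from the cited literature.

```python
# Illustrative sketch: rank AR-HUD display elements by estimated relevance
# for the current driving context and show only the top items, limiting
# cognitive load. All names and scores are hypothetical.
from dataclasses import dataclass

@dataclass
class DrivingContext:
    speed_kmh: float
    distance_to_turn_m: float
    hazard_alert: bool

def relevance_scores(ctx: DrivingContext) -> dict[str, float]:
    """Hand-tuned scores standing in for a learned behavior/context model."""
    return {
        "speed":      0.6 + 0.4 * (ctx.speed_kmh > 100),       # more salient at high speed
        "navigation": 1.0 if ctx.distance_to_turn_m < 300 else 0.3,
        "hazard":     1.5 if ctx.hazard_alert else 0.0,
        "media":      0.1,                                      # low priority while driving
    }

def select_hud_items(ctx: DrivingContext, max_items: int = 2) -> list[str]:
    scores = relevance_scores(ctx)
    ranked = sorted(scores, key=scores.get, reverse=True)
    return [item for item in ranked if scores[item] > 0][:max_items]

# Example: approaching a turn at moderate speed with no hazard
print(select_hud_items(DrivingContext(speed_kmh=80, distance_to_turn_m=150, hazard_alert=False)))
# -> ['navigation', 'speed']
```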

The study is organized as follows. Section 1 (Introduction) provides an overview of the research topic and the motivations driving the study. Section 2 (Related Work) reviews the literature and previous studies on AR-HUD interfaces and cognitive load. Section 3 (Methods) presents the theoretical framework of AR-HUD visual perception intensity, the design of the HCI prototype used in the AR-HUD experiment, the experimental equipment, the experimental procedure, and the data collection methods. Section 4 (Results) reports the outcomes of the eye-tracking experiment, the algorithm integrating the genetic algorithm (GA) and backpropagation neural network (BPNN), the optimized mathematical model based on cognitive load and visual intensity, and a case study of the AR-HUD interface, including the chromosome coding, topological structure, model parameters, and genetic algorithm used in the neural network model. Section 5 (Discussion) examines the limitations of the current research and proposes avenues for future investigation and improvement. Section 6 (Conclusion) summarizes the key findings and offers concluding remarks.
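The BP-GA integration outlined for Section 4 is not shown in this preview. The following is a minimal, self-contained sketch of one common BP-GA arrangement, in which a genetic algorithm searches the initial weight vector (the chromosome) of a small backpropagation network that is then fine-tuned by gradient descent. The eye-tracking features, network topology, GA settings, and synthetic data are illustrative assumptions, not the chromosome coding, topology, or parameters reported in the article.

```python
# Hypothetical BP-GA sketch: a GA evolves initial weights of a small network
# predicting cognitive load from assumed eye-tracking features; backpropagation
# then fine-tunes the best chromosome. All settings are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Assumed features: [fixation duration, pupil diameter, saccade count]
# Target: normalized cognitive-load score. Synthetic data for illustration only.
X = rng.uniform(0, 1, size=(200, 3))
y = (0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2]
     + 0.05 * rng.normal(size=200)).reshape(-1, 1)

N_IN, N_HID, N_OUT = 3, 6, 1
N_W = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT   # weights + biases

def unpack(w):
    i = 0
    W1 = w[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = w[i:i + N_HID]; i += N_HID
    W2 = w[i:i + N_HID * N_OUT].reshape(N_HID, N_OUT); i += N_HID * N_OUT
    b2 = w[i:]
    return W1, b1, W2, b2

def forward(w, X):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)          # hidden layer
    return h @ W2 + b2, h             # linear output

def mse(w):
    pred, _ = forward(w, X)
    return float(np.mean((pred - y) ** 2))

# --- Genetic algorithm: evolve candidate weight vectors (chromosomes) ---
POP, GENS = 40, 60
pop = rng.normal(0, 0.5, size=(POP, N_W))
for _ in range(GENS):
    fit = np.array([mse(w) for w in pop])
    order = np.argsort(fit)                          # lower MSE = fitter
    parents = pop[order[:POP // 2]]
    children = []
    for _ in range(POP - len(parents)):
        a, b = parents[rng.integers(len(parents), size=2)]
        mask = rng.random(N_W) < 0.5                 # uniform crossover
        child = np.where(mask, a, b) + rng.normal(0, 0.05, N_W)  # mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmin([mse(w) for w in pop])]

# --- Backpropagation fine-tuning of the GA-selected weights ---
lr = 0.05
for _ in range(500):
    W1, b1, W2, b2 = unpack(best)
    pred, h = forward(best, X)
    err = (pred - y) / len(X)
    gW2 = h.T @ err; gb2 = err.sum(0)
    dh = (err @ W2.T) * (1 - h ** 2)                 # tanh derivative
    gW1 = X.T @ dh; gb1 = dh.sum(0)
    best -= lr * np.concatenate([gW1.ravel(), gb1, gW2.ravel(), gb2])

print("final training MSE:", round(mse(best), 4))
```

In this arrangement the chromosome is simply the flattened weight vector; other BP-GA variants instead encode hyperparameters such as hidden-layer size or learning rate, which may be closer to what the article describes.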
