Introduction
Augmented Reality Head-Up Display (AR-HUD) technology has gained considerable traction in the field of driving assistance because it presents dashboard information while allowing the driver to keep their eyes on the road ahead. By overlaying virtual information onto the driver's field of view, the transparent AR-HUD display provides vital data such as speed, navigation instructions, and vehicle alerts. Consequently, the technology holds the potential to enhance driving safety by alleviating cognitive load (Cao et al., 2022).

Interactive design for AR-HUD systems has advanced substantially in recent years, with the primary objectives of enhancing user experience and minimizing distraction during driving. A pivotal challenge in AR-HUD design is presenting information in a manner that is easily comprehensible yet does not divert the driver's attention from the road. To address this challenge, researchers have devised various interactive design strategies tailored to AR-HUD systems. One prominent strategy employs color and brightness adjustments to emphasize crucial information while mitigating distraction: with appropriate color schemes and brightness levels, the display becomes more user-friendly and visually appealing. For instance, visually comfortable colors such as blue and green (Gabbard et al., 2020) can present information without monopolizing the driver's attention.

Another crucial facet of AR-HUD interactive design is the integration of audio and haptic feedback. Audio feedback delivers important updates, such as speed or navigation information, to the driver without requiring visual diversion. Similarly, haptic feedback, in the form of vibration or touch, offers a tangible response to significant cues such as speed limit alerts or lane changes.
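As a purely illustrative sketch (not drawn from the cited studies), the priority-based color and brightness strategy described above might be expressed as a mapping from alert priority to display style; the priority tiers, hex colors, and brightness values here are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class HudStyle:
    color: str         # RGB hex string for the rendered element
    brightness: float  # relative luminance scale, 0.0 to 1.0

# Hypothetical tiers: cool hues (green/blue) for routine and advisory
# information, consistent with the guidance that such colors are visually
# comfortable; a warm, brighter hue is reserved for urgent alerts.
STYLE_BY_PRIORITY = {
    "routine":  HudStyle(color="#4caf50", brightness=0.50),  # e.g., current speed
    "advisory": HudStyle(color="#2196f3", brightness=0.65),  # e.g., navigation cue
    "urgent":   HudStyle(color="#ff9800", brightness=0.90),  # e.g., collision warning
}

def style_for(priority: str) -> HudStyle:
    """Return the display style for a priority tier, defaulting to routine."""
    return STYLE_BY_PRIORITY.get(priority, STYLE_BY_PRIORITY["routine"])
```

In this sketch, routine elements are rendered dimmer so they remain legible without competing for attention, while urgent alerts use higher brightness to draw the eye only when warranted.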
Moreover, machine learning algorithms have emerged as a valuable component of AR-HUD interactive design. These algorithms analyze the driver's behavior and context to anticipate which information the driver will need next. This predictive capability allows the system to adjust the display content in real time, presenting the most relevant information in the most effective way, which both reduces cognitive load and enhances driving safety. Machine learning can also improve the accuracy of the AR-HUD system, reducing the likelihood of errors and false alerts.

In recent years, a noticeable trend in AR-HUD systems has been the development of highly customizable and personalized solutions. These systems let drivers tailor the display to their specific preferences and driving habits: customization options include adjusting the size and position of information and selecting which information to display. By offering such features, AR-HUD systems mitigate cognitive load by presenting only pertinent information to the driver, thereby optimizing the user experience (Zhang & Zhou, 2018).
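The context-driven content selection described above can be sketched as follows. This is not the study's model; it is a hypothetical rule-based placeholder standing in for a learned predictor, and the context features, thresholds, and element names are all illustrative:

```python
def select_hud_items(context: dict) -> list:
    """Rank which HUD elements to show for a driving context, most relevant
    first. A deployed system might replace these hand-written rules with a
    classifier trained on driver behavior; the interface stays the same."""
    items = []
    if context.get("approaching_turn", False):
        items.append("navigation_arrow")   # turn guidance takes priority
    if context.get("speed_kmh", 0) > context.get("speed_limit_kmh", 120):
        items.append("speed_limit_alert")  # over-limit warning
    if context.get("fuel_fraction", 1.0) < 0.1:
        items.append("low_fuel_warning")
    items.append("current_speed")          # always-on baseline element
    return items
```

For example, `select_hud_items({"approaching_turn": True, "speed_kmh": 60, "speed_limit_kmh": 50})` yields `["navigation_arrow", "speed_limit_alert", "current_speed"]`, while an empty context yields only `["current_speed"]`, showing how the display contracts to the minimum when nothing else is relevant.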
The study is organized as follows. Section 1 (Introduction) provides an overview of the research topic and the motivations driving the study. Section 2 (Related Work) reviews the literature and previous studies on AR-HUD interfaces and cognitive load. Section 3 (Methods) presents the theoretical framework of AR-HUD visual perception intensity, the design of the HCI prototype for the AR-HUD experiment, the experimental equipment, the experimental methodology, and the data collection methods. Section 4 (Results) reports the outcomes of the eye-tracking experiment, details the algorithm integrating a genetic algorithm (GA) with a back-propagation neural network (BPNN), presents the optimized mathematical model based on cognitive load and visual intensity, describes the AR-HUD interface case study, and provides the chromosome coding, topological structure, model parameters, and genetic algorithm used in the neural network model. Section 5 (Discussion) evaluates the limitations of the current research and proposes avenues for future investigation and improvement. Section 6 (Conclusion) summarizes the study's key findings and offers concluding remarks.