Multimedia Human-Computer Interaction Method in Video Animation Based on Artificial Intelligence Technology

Linran Sun, Nojun Kwak
DOI: 10.4018/IJITWE.344419

Abstract

With continuing innovation in computer technology, multimedia technologies capable of handling comprehensive media information and real-time information interaction have emerged, extending the application of computers to every aspect of industry and daily life. As a product of digital technology, animation plays an irreplaceable role in the production of multimedia courseware. However, existing human-computer interaction methods suffer from shortcomings such as incomplete extraction of video features and poor interaction performance. In this context, this paper designs a multimedia human-computer interaction method for animation works based on a CNN model. First, the original video data are collected and preprocessed. The preprocessed data are then fed into a CNN-based human-computer interaction framework for feature extraction. Finally, simulation experiments demonstrate the effectiveness and practicability of the proposed method, providing a reference and basis for research on modern human-computer interaction.
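To make the pipeline described above concrete (collect and preprocess raw video frames, then extract features with a CNN), the following is a minimal sketch in PyTorch. It is an illustrative assumption, not the paper's actual architecture: the layer sizes, 16-frame clip length, and the names `preprocess_frames` and `VideoFeatureCNN` are hypothetical.

```python
# Minimal sketch of a video preprocessing + CNN feature-extraction pipeline.
# All dimensions and module choices are illustrative assumptions (PyTorch),
# not the architecture proposed in the paper.
import torch
import torch.nn as nn


def preprocess_frames(frames: torch.Tensor) -> torch.Tensor:
    """Normalize a clip of raw frames to zero mean / unit variance per channel.

    frames: (num_frames, channels, height, width), values in [0, 255].
    """
    frames = frames.float() / 255.0
    mean = frames.mean(dim=(0, 2, 3), keepdim=True)
    std = frames.std(dim=(0, 2, 3), keepdim=True) + 1e-6
    return (frames - mean) / std


class VideoFeatureCNN(nn.Module):
    """Per-frame 2D CNN whose outputs are averaged over time into one clip feature."""

    def __init__(self, feature_dim: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # -> (num_frames, 64, 1, 1)
        )
        self.proj = nn.Linear(64, feature_dim)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (num_frames, 3, H, W) -> per-frame features -> temporal average
        per_frame = self.backbone(clip).flatten(1)   # (num_frames, 64)
        return self.proj(per_frame).mean(dim=0)      # (feature_dim,)


if __name__ == "__main__":
    raw_clip = torch.randint(0, 256, (16, 3, 112, 112))  # dummy 16-frame clip
    features = VideoFeatureCNN()(preprocess_frames(raw_clip))
    print(features.shape)  # torch.Size([128])
```

In this sketch the temporal dimension is handled by simple averaging of per-frame features; the related work in Table 1 lists richer alternatives (ConvLSTM, 3D CNNs, Transformers) for modeling temporal structure.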
Article Preview

With the development of a variety of interaction design platforms and software, micro-videos with interactive functions have emerged in large numbers. The growth of the internet has promoted interactive micro-video in internet advertising, television, and education, and micro-videos that integrate interaction design are now widely distributed online. For example, interaction design is applied in interactive movies, where the audience can choose how the story begins and ends; in such cases the video must support multiple interactions within a single playback platform. Online teaching platforms likewise combine interaction design with courses delivered as series of short videos: learners interact with the content and with other learners through interface interaction design, in concrete forms such as bullet comments (barrage), embedded elements, and hyperlinks (Tian & Tsai, 2021). As shown in Table 1, scholars in this field have made progress along two lines: the development of interaction design in micro-video and applied research on interaction design in micro-video (Gao, 2018). Interaction design technologies developed earlier abroad, have reached high maturity, and are applied mostly in human-computer interaction, app design, digital media, and other commercial fields. Integrating interaction design across these fields aims to enhance users' positive experience and, to a certain extent, create easy-to-use interactive products. Interaction design was first used in advertising to increase public engagement with products and attract consumers, thereby achieving the desired marketing effect.

Table 1.
Research Topics and Methods of Related Research
Author/Year | Research Topic | Method/Model
Wu et al., 2015 | Video classification | Hybrid deep learning framework with two CNNs for spatial and short-term motion features
Yao et al., 2015 | Video description generation | Approach considering both local and global temporal structure
Xiong et al., 2017 | Crowd counting | ConvLSTM model to exploit temporal information
He et al., 2019 | Video spatial-temporal modeling | Novel spatial-temporal network (StNet) architecture
Isobe et al., 2020 | Video super-resolution | Comparison of 2D CNN, 3D CNN, and RNN for temporal modeling
Fu et al., 2021 | Video language modeling | VIOLET network with Masked Visual-token Modeling pre-training task
Nir et al., 2022 | Semantic representation for cartoons/animation videos | Method for refining semantic representation for specific animated content
Wang et al., 2022 | Video spatial-temporal modeling | Video Mobile-Former with 3D-CNNs and Transformer modules
Li et al., 2022 | Video matting | VMFormer, a Transformer-based end-to-end method
Zhao et al., 2022 | Human modeling and rendering | Comprehensive neural approach based on dense multi-view videos
