Introduction
At the National Institute for Fusion Science (NIFS), high-temperature plasmas have been produced in the Large Helical Device (LHD) (Large Helical Device Project, n.d.) for nuclear fusion experiments (Takeiri et al., 2018). In these experiments, visible light emission from the plasma has been observed in the vacuum vessel during plasma discharges. Images of this visible light have been recorded as videos and stored on disk storage at NIFS (Shoji, 2020; Shoji, Yamazaki, & Yamaguchi, 2000). Studying these recorded videos can lead to new discoveries or help to optimize the operational parameters of the experiments. Unusual light emission, which can damage devices in the vessel, has been observed during abnormal plasma discharges. To guarantee safe operation of the experiments, prediction of such unusual light emission in plasma discharges has been attempted with a machine learning method (Nakagawa, Hochin, Nomiya, Nakanishi, & Shoji, 2021). However, because only a very limited number of videos contain such unusual light emission, generating more videos showing similar phenomena is required to increase the amount of training data for the machine learning method.
Since the Generative Adversarial Network (GAN) (Goodfellow, Pouget-Abadie, Mirza, Xu, Warde-Farley, Ozair, Courville, & Bengio, 2014) was introduced as a neural network architecture for generative modeling, many attempts have been made to generate videos with GANs. A method for generating plausible videos was proposed by Ohnishi, Yamamoto, Ushiku, and Harada (2018). In that study, motion and appearance are treated as the two essential components of a video, and a combined model of FlowGAN and TextureGAN is trained on datasets of human actions to produce realistic motion videos. In the case of plasma emission, however, the challenge in video generation is twofold: each generated frame should present a reasonable movement relative to its previous and next neighbors, and a smooth transition is also expected across all frames.
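The frame-to-frame smoothness requirement described above can be quantified with a simple per-frame difference metric. The following is only an illustrative sketch (the function name and the toy clips are not from the original study), assuming grayscale frames stored as a NumPy array:

```python
import numpy as np

def smoothness_scores(frames):
    """Mean absolute pixel change between consecutive frames.

    frames: array of shape (T, H, W) with values in [0, 1].
    Returns an array of T-1 scores; a large spike marks an
    abrupt transition that a video generator should avoid.
    """
    diffs = np.abs(np.diff(frames.astype(np.float64), axis=0))
    return diffs.mean(axis=(1, 2))

# A gradually brightening 4-frame clip changes smoothly...
smooth = np.stack([np.full((8, 8), v) for v in (0.0, 0.1, 0.2, 0.3)])
# ...while this clip jumps abruptly between its middle frames.
jumpy = np.stack([np.full((8, 8), v) for v in (0.0, 0.1, 0.9, 1.0)])
print(smoothness_scores(smooth))  # roughly [0.1, 0.1, 0.1]
print(smoothness_scores(jumpy))   # middle score spikes to 0.8
```

Such a metric only checks pairwise transitions; it does not capture whether the overall motion is physically plausible, which is why adversarial training is used in the cited work.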
In addition, several methods for predicting an unusual plasma discharge (Nakagawa, Hochin, Nomiya, Nakanishi, & Shoji, 2021) or the occurrence of plasma disruptions (Yokoyama, Sueyoshi, Miyoshi, Hiwatari, Igarashi, Okada, & Ogawa, 2018) have been proposed using Support Vector Machines and neural networks. In these studies, input features, either visual information from videos or experimental parameters, are prepared for the learning methods. However, it remains unclear whether these machine learning methods are effective at detecting the frame that occurs just before a disruption or a sudden interruption in a plasma video.
This study proposes a method for generating unusual plasma discharge videos by combining a generative adversarial network, a neural-network-based classification model, and interpolation.
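The interpolation component mentioned above can be illustrated in its simplest form as linear blending between two frames. This is a minimal sketch under the assumption of grayscale NumPy frames (the function name is hypothetical, and the actual method may use a more sophisticated scheme):

```python
import numpy as np

def interpolate_frames(frame_a, frame_b, n_intermediate):
    """Linearly blend two frames to synthesize intermediate ones.

    Returns a list of n_intermediate frames that transition
    evenly from frame_a toward frame_b, which can be used to
    lengthen a clip or smooth the gap between generated frames.
    """
    a = frame_a.astype(np.float64)
    b = frame_b.astype(np.float64)
    # Evenly spaced blend weights, excluding the endpoints
    # (the endpoints are the input frames themselves).
    weights = np.linspace(0.0, 1.0, n_intermediate + 2)[1:-1]
    return [(1.0 - w) * a + w * b for w in weights]

# Example: three frames inserted between a dark and a bright frame.
dark = np.zeros((4, 4))
bright = np.ones((4, 4))
mids = interpolate_frames(dark, bright, 3)
print([m.mean() for m in mids])  # [0.25, 0.5, 0.75]
```

Linear blending fades pixel intensities rather than moving structures, so it is best suited to gradual emission changes; motion-aware interpolation would be needed for fast-moving features.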
The remainder of the paper is structured as follows. Section 2 presents related work. Section 3 describes the plasma videos treated in this paper. Section 4 proposes the method. Section 5 describes the experiment. Section 6 presents the results and discussion. Finally, Section 7 provides the conclusion and suggestions for future work.