Introduction
Process mining (van der Aalst, 2016) is a scientific discipline that enables the extraction of knowledge from the event data recorded by modern information systems, for example, historical execution records. Process mining techniques are typically used to discover, monitor, and improve processes by analyzing the event logs those processes produce (van der Aalst et al., 2012). Process discovery, conformance checking, and process enhancement are its three main functions, and they are widely applied in domains such as finance, transportation, and healthcare. Several authors study the application of process mining to production processes (Demtroeder et al., 2019; Lechner, 2020) and compare it with other application areas such as sales, purchasing, marketing, and insurance. In addition, existing work reports on practical deployments (Reinkemeyer, 2020).
Many techniques have been proposed to mine such patterns from execution traces. However, most existing approaches extract only simple patterns, or patterns restricted to a manually selected set of events. Earlier work has shown that patterns can be characterized as a formal language (Ammons et al., 2002). Viewed this way, patterns correspond to regular or otherwise restricted languages, and pattern mining becomes a language-learning problem.
Existing methods share the same overall structure: each takes execution traces or event logs as input and produces one or more formal languages that represent the mined patterns or behaviors. The individual approaches, however, differ in important ways.
In this article, a new pattern-mining method that addresses several limitations of existing approaches is presented. The underlying insight is twofold. First, small patterns can be composed into larger ones. Second, these micro-pattern compositions can themselves be combined. Using this insight, the work is divided into two parts. First, techniques are applied to extract the small micro-patterns; they combine a finite state automaton algorithm with the composition rules and token-based extraction algorithm of Gabel and Su (Gabel & Su, 2008).
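To illustrate the kind of micro-pattern a small finite state automaton can encode, the sketch below checks a two-event response pattern ("every A is eventually followed by a B") over an event trace. This is a minimal illustrative example, not the authors' implementation; the event names `open`/`close` are hypothetical.

```java
import java.util.List;

// Minimal sketch of a micro-pattern checker: a two-state finite
// automaton for the response pattern "every A is eventually followed
// by a B". Illustrative only, not the authors' code.
public class ResponsePattern {
    // Returns true if the trace satisfies the pattern: after any
    // occurrence of eventA, eventB occurs later in the trace.
    public static boolean accepts(List<String> trace, String eventA, String eventB) {
        boolean pending = false; // state: an A is waiting for its B
        for (String event : trace) {
            if (event.equals(eventA)) {
                pending = true;   // move to the "waiting" state
            } else if (event.equals(eventB)) {
                pending = false;  // B discharges any pending A
            }
        }
        return !pending; // accept only if no A is left unanswered
    }

    public static void main(String[] args) {
        List<String> ok  = List.of("open", "read", "close", "open", "close");
        List<String> bad = List.of("open", "read", "close", "open");
        System.out.println(accepts(ok,  "open", "close")); // true
        System.out.println(accepts(bad, "open", "close")); // false
    }
}
```

Composing two such automata (for example, "A before B" with "B before C") yields a larger pattern automaton, which is the composition idea the method builds on.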
Second, the Hadoop MapReduce framework is used to extract and compose the patterns at scale. The mined patterns are represented as finite state automata or regular expressions. This part reuses the same token-based extraction algorithm, but mines the small micro-patterns in parallel during the map phase and composes and counts them into larger patterns during the reduce phase. The goal is to make pattern mining scalable enough to be applied to large event logs.
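The map/reduce split described above can be sketched as follows. This is an in-memory simulation for illustration only; the actual system runs on Hadoop MapReduce, and the assumption here is that the map phase emits candidate micro-patterns (ordered event pairs) from each trace while the reduce phase counts their support across traces.

```java
import java.util.*;

// Illustrative in-memory simulation of the map/reduce split: the map
// phase emits candidate micro-patterns (ordered event pairs) from one
// trace; the reduce phase counts how many traces support each pair.
public class MicroPatternCount {
    // Map phase: emit every distinct ordered pair "a->b" where a
    // precedes b somewhere in this trace.
    public static Set<String> mapTrace(List<String> trace) {
        Set<String> pairs = new HashSet<>();
        for (int i = 0; i < trace.size(); i++)
            for (int j = i + 1; j < trace.size(); j++)
                pairs.add(trace.get(i) + "->" + trace.get(j));
        return pairs;
    }

    // Reduce phase: sum the per-trace emissions into a support count.
    public static Map<String, Integer> reduce(List<Set<String>> emitted) {
        Map<String, Integer> support = new HashMap<>();
        for (Set<String> pairs : emitted)
            for (String p : pairs)
                support.merge(p, 1, Integer::sum);
        return support;
    }

    public static void main(String[] args) {
        List<List<String>> log = List.of(
            List.of("open", "read", "close"),
            List.of("open", "close"));
        List<Set<String>> emitted = new ArrayList<>();
        for (List<String> trace : log) emitted.add(mapTrace(trace));
        // "open->close" is supported by both traces.
        System.out.println(reduce(emitted).get("open->close")); // 2
    }
}
```

In a real Hadoop job, `mapTrace` would live in a `Mapper` emitting `(pair, 1)` key-value records and the counting would happen in a `Reducer`; the in-memory version above only mirrors that data flow.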
The method is implemented in Java and evaluated on two log files produced by two programs; the first dataset is 20 GB in size and the second is 30 GB. It was tested on three cluster configurations in the cloud: a first cluster of 5 machines, a second of 10 machines, and a third of 20 machines. The extracted patterns include call, response, and message patterns, among others.