1. Introduction
Cloud computing has attracted considerable attention from the research and industry communities; it provides large pools of computing resources through virtualized sharing of storage, processing power, applications and services (Dadzie, 2019). Files outsourced by cloud users can easily be tampered with, since they are stored in the databases of third-party service providers; users retain no control over their data, so its integrity cannot be guaranteed. Providing security assurance for user data has therefore become a primary concern for cloud service providers. Cloud servers accept no responsibility for data loss, as they do not provide security assurance for user data. Remote data integrity checking allows a data owner to verify that a storage server is truthfully storing the owner's data.
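The remote integrity check described above can be illustrated with a minimal challenge-response sketch. All names here are hypothetical, and a real scheme would let the owner verify without a full local copy (e.g. via precomputed tokens or homomorphic tags); this only shows the basic nonce-plus-hash interaction:

```python
import hashlib
import os

def challenge() -> bytes:
    """Owner side: pick a fresh random nonce for each audit."""
    return os.urandom(16)

def prove(stored_data: bytes, nonce: bytes) -> str:
    """Server side: hash the nonce together with the stored file,
    proving possession of the data at audit time."""
    return hashlib.sha256(nonce + stored_data).hexdigest()

def verify(expected_data: bytes, nonce: bytes, proof: str) -> bool:
    """Owner side: recompute the expected proof and compare."""
    return hashlib.sha256(nonce + expected_data).hexdigest() == proof
```

Because the nonce is fresh per audit, the server cannot replay an old proof; it must hash the actual data it currently holds.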
Cloud computing encourages users to outsource their data to cloud storage. Outsourcing means that users lose physical control over their own data, which makes remote data integrity verification a critical challenge for potential cloud users. To relieve users of the burden of frequent integrity verifications, a Third Party Auditor (TPA) is introduced to perform verifications on behalf of the user for data integrity assurance. Existing public auditing schemes rely on the assumption that the TPA is trusted, so they cannot be directly extended to the outsourced auditing model, where the TPA might be dishonest and any two of the three involved entities might collude. In this paper, we propose a dynamic outsourced auditing scheme which can not only protect against any dishonest entity and collusion, but also record intruder details, including the IP address, in the database. We present a new approach in which injected files are recovered by maintaining a separate log file. The batch-leaves-authenticated Merkle Hash Tree (MHT) verifies multiple leaf nodes together with their indexes in a single operation, which is better suited to the dynamic outsourced auditing system. Experimental results show that our advanced security system minimizes initialization costs for both the user and the TPA, incurs a lower cost of dynamism on the user side, and provides dual security through the various security keys generated (Rao et al., 2017).
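The Merkle Hash Tree underlying such schemes can be sketched as follows. This is a minimal illustrative implementation, not the batch-leaves-authenticated construction of Rao et al. (2017), whose batch verification additionally merges internal nodes shared across leaves; here the "batch" check simply verifies each leaf against one root:

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def build_tree(blocks):
    """Build an MHT over data blocks; returns levels, leaves first."""
    level = [h(b) for b in blocks]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:                      # duplicate last node on odd levels
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def auth_path(levels, index):
    """Collect sibling hashes from leaf to root for one leaf index."""
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        path.append(level[index ^ 1])
        index //= 2
    return path

def verify_leaf(root, block, index, path):
    """Recompute the root from one block, its index and sibling path;
    the index bits fix left/right order, so the position is authenticated too."""
    node = h(block)
    for sib in path:
        node = h(node + sib) if index % 2 == 0 else h(sib + node)
        index //= 2
    return node == root

def batch_verify(root, items):
    """Naive batch check of several (block, index, path) tuples."""
    return all(verify_leaf(root, b, i, p) for b, i, p in items)
```

Dynamic updates (insert, delete, modify a block) only recompute the hashes along one root-to-leaf path, which is why MHT-based schemes keep the cost of dynamism low on the user side.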
Resource scheduling in the cloud is a challenging task, and matching appropriate resources to cloud workloads depends on the QoS requirements of cloud applications. In a cloud environment, the heterogeneity, uncertainty and dispersion of resources create allocation problems that cannot be addressed by existing resource allocation policies. Researchers still face difficulties in selecting an efficient and appropriate resource scheduling algorithm for a specific workload from the existing literature. This research presents a broad systematic literature review of resource management in the cloud in general and cloud resource scheduling in particular (Singh & Chana, 2016).
Recent developments in the field of cloud computing have immensely changed the way of computing as well as the concept of computing resources. In a cloud-based computing infrastructure, the resources normally reside on someone else's premises or network and are accessed remotely by the cloud users (Petre, 2012; Ogigau-Neamtiu, 2012; Singh & Jangwal, 2012).
Processing is done remotely, which means that a user's data and other inputs must be transmitted to the cloud infrastructure or server for processing, and the output is returned upon completion. The ideal place to analyze most IoT data, by contrast, is near the devices that produce and act on that data; this is called fog computing. Any device with computing, storage, and network connectivity can be a fog node.
In some cases, it might be required, or at least possible, for a person to store data on remote cloud servers. This gives rise to the following three sensitive states or scenarios that are of particular concern within the operational context of cloud computing:
- The transmission of personal sensitive data to the cloud server
- The transmission of data from the cloud server to clients' computers
- The storage of clients' personal data on remote cloud servers not owned by the clients
Forrester Research predicts that "the global cloud computing market will reach $241 billion in 2020, up from $40.7 billion in 2010".