
Identification of Duplicated Data by using Fixed Size Chunking Algorithm

Author(s):

B. Vijay Kumar Naik, KMM Institute of PG Studies; Mrs. C. Hemavathy, KMM Institute of PG Studies

Keywords:

Big data, Cloud computing, Data De-duplication, Storage Optimization, Stages in de-duplication

Abstract

Over the past few years there has been rapid growth in cloud computing and big data. With an increasing number of firms consuming cloud resources, it is important to protect the information of the many different users who share centralized infrastructure. Millions of data items are generated every second by new technologies such as IoT devices, so storing and managing such a large volume of data is difficult. Many enterprise organizations invest considerable money to store this data for backup and disaster-recovery purposes, but traditional backup solutions provide no facility to prevent the system from storing duplicate data, which increases storage cost and backup time and in turn degrades system performance. The fixed-size chunking algorithm in data de-duplication is a solution to this problem. It is an emerging technique that eliminates duplicate or redundant data and stores only a single unique copy of each piece of data, thereby reducing storage utilization and the cost of maintaining redundant data.
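The core idea of fixed-size chunking de-duplication can be illustrated with a minimal sketch. The code below is not from the paper; the chunk size, function names, and the use of SHA-256 as the fingerprint are illustrative assumptions. Data is split into fixed-size chunks, each chunk is fingerprinted by its hash, and only chunks with previously unseen fingerprints are written to the store:

```python
import hashlib

CHUNK_SIZE = 4096  # fixed chunk size in bytes (illustrative choice)


def dedupe_store(data: bytes, store: dict) -> list:
    """Split data into fixed-size chunks, keep one copy of each unique
    chunk in `store` (keyed by its SHA-256 digest), and return the
    ordered list of chunk keys needed to rebuild the data."""
    keys = []
    for offset in range(0, len(data), CHUNK_SIZE):
        chunk = data[offset:offset + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:      # only unique chunks consume storage
            store[digest] = chunk
        keys.append(digest)
    return keys


def restore(keys: list, store: dict) -> bytes:
    """Reassemble the original data from its chunk keys."""
    return b"".join(store[key] for key in keys)
```

Backing up the same data twice adds no new chunks to the store, which is the storage saving the abstract describes; the trade-off of the fixed-size variant is that an insertion near the start of a file shifts every later chunk boundary, so shifted-but-identical content is not recognized as duplicate.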

Other Details

Paper ID: IJSRDV7I10542
Published in: Volume : 7, Issue : 1
Publication Date: 01/04/2019
Page(s): 541-545
