[PDF][PDF] Comparative analysis for content defined chunking algorithms in data deduplication

D Viji, S Revathy - Webology, 2021 - researchgate.net
Abstract
Data deduplication works by eliminating redundant data, thereby reducing storage consumption. Nowadays, ever more data is generated and stored repeatedly in the cloud, consuming large volumes of storage. Data deduplication reduces data volume, disk space, and network bandwidth usage, which in turn lowers the cost and energy consumption of running storage systems. In the deduplication process, data is broken into small chunks or blocks. A hash ID is calculated for each block and compared against the hashes of existing blocks to detect duplicates. Blocks may be of fixed or variable size; compared with fixed-size blocking, variable-size chunking gives better results. The chunking step is therefore the initial task of deduplication and determines the quality of the final result. In this paper, we discuss various content-defined chunking algorithms and their performance based on chunking properties such as chunking speed, processing time, and throughput.
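The pipeline the abstract describes (content-defined chunking, per-chunk hash IDs, duplicate detection) can be sketched as follows. This is a minimal illustration, not any specific algorithm from the paper: it uses a Gear-style rolling hash with hypothetical parameters (`MIN_SIZE`, `MAX_SIZE`, `MASK`) chosen only for demonstration, and SHA-256 as the chunk hash ID.

```python
import hashlib
import random

# Hypothetical parameters for illustration only: the boundary mask controls
# the average chunk size; real systems tune these per workload.
MIN_SIZE, MAX_SIZE = 64, 1024
MASK = 0xFF  # boundary declared when low 8 hash bits are zero

# Gear-style table: one pseudo-random 32-bit value per possible byte.
_rng = random.Random(42)
GEAR = [_rng.getrandbits(32) for _ in range(256)]

def cdc_chunks(data: bytes):
    """Split data into variable-size chunks at content-defined boundaries."""
    chunks, start, h = [], 0, 0
    for i, b in enumerate(data):
        h = ((h << 1) + GEAR[b]) & 0xFFFFFFFF  # rolling hash update
        length = i - start + 1
        # Cut when the hash hits the boundary condition (after MIN_SIZE),
        # or force a cut at MAX_SIZE so chunks stay bounded.
        if (length >= MIN_SIZE and (h & MASK) == 0) or length >= MAX_SIZE:
            chunks.append(data[start:i + 1])
            start, h = i + 1, 0
    if start < len(data):
        chunks.append(data[start:])
    return chunks

def deduplicate(data: bytes):
    """Store each unique chunk once, keyed by its hash ID."""
    store = {}   # hash ID -> chunk bytes (duplicates stored once)
    recipe = []  # ordered hash IDs needed to reconstruct the data
    for chunk in cdc_chunks(data):
        hid = hashlib.sha256(chunk).hexdigest()
        store.setdefault(hid, chunk)
        recipe.append(hid)
    return store, recipe
```

Because boundaries depend on content rather than fixed offsets, inserting bytes near the start of a file shifts only the chunks around the edit, and unmodified regions still hash to the same IDs, which is why variable-size chunking detects more duplicates than fixed-size blocking.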