Hadoop Perfect File: A fast and memory-efficient metadata access archive file to face small files problem in HDFS

Yanlong Zhai*, Jude Tchaye-Kondi, Kwei Jay Lin, Liehuang Zhu, Wenjun Tao, Xiaojiang Du, Mohsen Guizani

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

19 Citations (Scopus)

Abstract

HDFS faces several issues when handling a large number of small files. These issues are well addressed by archive systems, which combine small files into larger ones and use index files to hold the information needed to retrieve a small file's content from the big archive file. However, existing archive-based solutions incur significant overhead when retrieving a file's content, since additional processing and I/O are needed to acquire the retrieval information before the actual content can be accessed, which degrades access efficiency. This paper presents a new archive file named Hadoop Perfect File (HPF). HPF minimizes access overhead by reading metadata directly from the part of the index file that contains it, thereby reducing the additional processing and I/O required and improving access efficiency. Our index system uses two hash functions: a dynamic hash function distributes metadata records across index files, and an order-preserving perfect hash function memorizes the position of each small file's metadata record within its index file.
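The abstract only sketches the index design, but the two-step lookup it describes (a dynamic hash choosing an index file, then an order-preserving perfect hash giving the record's slot inside it) can be pictured roughly as in the Java sketch below. This is a minimal illustration under stated assumptions, not the authors' HPF implementation: the class and method names are invented, a fixed-size (offset, length) record layout is assumed, and the order-preserving perfect hash is stood in for by a precomputed rank table.

```java
import java.nio.ByteBuffer;
import java.util.Map;

/**
 * Minimal sketch of a two-level metadata lookup in the spirit of HPF.
 * All names and the record layout are illustrative assumptions.
 */
public class HpfLookupSketch {

    /** Assumed fixed-size metadata record: offset (long) + length (int) of a small file in the archive. */
    static final int RECORD_SIZE = Long.BYTES + Integer.BYTES;

    /** Level 1 (assumption): a simple hash standing in for the dynamic hash that picks one of N index files. */
    static int indexFileFor(String fileName, int numIndexFiles) {
        return Math.floorMod(fileName.hashCode(), numIndexFiles);
    }

    /**
     * Level 2 (assumption): stands in for the order-preserving perfect hash.
     * A real OPPH maps each key to a unique rank without collisions; here a
     * precomputed rank table fakes that behaviour for demonstration only.
     */
    static int recordSlot(String fileName, Map<String, Integer> precomputedRanks) {
        return precomputedRanks.get(fileName);
    }

    /** One positioned read into the chosen index file: slot * RECORD_SIZE locates the record directly. */
    static long[] readRecord(ByteBuffer indexFile, int slot) {
        indexFile.position(slot * RECORD_SIZE);
        long offset = indexFile.getLong();
        int length = indexFile.getInt();
        return new long[] { offset, length };
    }

    public static void main(String[] args) {
        // Toy index file holding two metadata records.
        ByteBuffer indexFile = ByteBuffer.allocate(2 * RECORD_SIZE);
        indexFile.putLong(0L).putInt(4096);     // record 0: small file at archive offset 0, 4 KiB
        indexFile.putLong(4096L).putInt(1024);  // record 1: small file at archive offset 4096, 1 KiB

        Map<String, Integer> ranks = Map.of("a.txt", 0, "b.txt", 1);

        int which = indexFileFor("b.txt", 4);            // which index file to open (illustrative)
        int slot  = recordSlot("b.txt", ranks);           // position of the record inside it
        long[] rec = readRecord(indexFile, slot);         // single seek, no index scan
        System.out.printf("b.txt -> index file %d, archive offset %d, length %d%n", which, rec[0], rec[1]);
    }
}
```

The point of the sketch is that, once the record's slot is known from the hash functions, the metadata can be read with a single positioned I/O instead of scanning or fully loading the index, which is the access-overhead reduction the abstract claims.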

Original language: English
Pages (from-to): 119-130
Number of pages: 12
Journal: Journal of Parallel and Distributed Computing
Volume: 156
DOIs
Publication status: Published - Oct 2021
Externally published: Yes

Bibliographical note

Publisher Copyright:
© 2021 Elsevier Inc.

