Abstract
HDFS faces several issues when handling a large number of small files. These issues are well addressed by archive systems, which combine small files into larger ones and use index files to hold the information needed to retrieve a small file's content from the big archive file. However, existing archive-based solutions incur significant overheads when retrieving a file's content, since additional processing and I/Os are needed to acquire the retrieval information before accessing the actual content, thereby degrading access efficiency. This paper presents a new archive file named Hadoop Perfect File (HPF). HPF minimizes access overheads by reading metadata directly from the part of the index file that contains it, thus reducing the additional processing and I/Os needed and improving the efficiency of accessing files in the archive. Our index system uses two hash functions: a dynamic hash function distributes metadata records across index files, and an order-preserving perfect hash function memorizes the position of a small file's metadata record within its index file.
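To make the two-level lookup concrete, below is a minimal, hypothetical Java sketch of how such a scheme could resolve a small file in two seeks: a dynamic hash picks the index file, and an order-preserving perfect hash gives the slot of a fixed-size metadata record holding the content's offset and length in the archive. This is not the paper's actual design: the perfect hash is stood in for by a precomputed map, and the record layout, file naming, and modulo-based dynamic hash are all illustrative assumptions; it also uses the local filesystem rather than HDFS streams for self-containment.

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.Map;

/**
 * Illustrative HPF-style lookup (all names and layouts are assumptions):
 * dynamic hash -> index file, perfect hash -> record slot, one seek per file.
 */
public class HpfLookupSketch {
    // Assumed fixed-size metadata record: 8-byte offset + 8-byte length.
    static final int RECORD_SIZE = 16;

    private final int numIndexFiles;                // current number of index files
    private final Map<String, Integer> perfectHash; // stand-in for the order-preserving PHF

    HpfLookupSketch(int numIndexFiles, Map<String, Integer> perfectHash) {
        this.numIndexFiles = numIndexFiles;
        this.perfectHash = perfectHash;
    }

    // Simplified dynamic hash: routes a file name to one of the index files.
    int indexFileFor(String fileName) {
        return Math.floorMod(fileName.hashCode(), numIndexFiles);
    }

    // Reads a small file's content with one seek into the index file
    // and one seek into the archive file.
    byte[] read(String fileName, String archivePath) throws IOException {
        String indexPath = archivePath + ".index-" + indexFileFor(fileName);
        long offset, length;
        try (RandomAccessFile index = new RandomAccessFile(indexPath, "r")) {
            int slot = perfectHash.get(fileName);   // position memorized by the PHF
            index.seek((long) slot * RECORD_SIZE);  // jump straight to the record
            offset = index.readLong();
            length = index.readLong();
        }
        try (RandomAccessFile archive = new RandomAccessFile(archivePath, "r")) {
            archive.seek(offset);                   // jump straight to the content
            byte[] content = new byte[(int) length];
            archive.readFully(content);
            return content;
        }
    }
}
```

Because every metadata record has the same size and the perfect hash maps a file name directly to its slot, the lookup touches only the bytes it needs; no index scan or full index load is required, which is the access-overhead reduction the abstract describes.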
| Field | Value |
|---|---|
| Original language | English |
| Pages (from-to) | 119-130 |
| Number of pages | 12 |
| Journal | Journal of Parallel and Distributed Computing |
| Volume | 156 |
| DOIs | |
| Publication status | Published - Oct 2021 |
| Published externally | Yes |
Bibliographical note
Publisher Copyright: © 2021 Elsevier Inc.