MapReduce is a promising parallel programming model for processing large data sets. Hadoop is an up-and-coming open-source implementation of MapReduce that uses the Hadoop Distributed File System (HDFS) to store input and output data. Because HDFS is not POSIX compatible, existing software cannot directly access data stored in HDFS, which makes it impossible to share storage between existing applications and MapReduce applications.
For external applications to process data with MapReduce, the data must first be imported into HDFS, processed, and the output then exported to a POSIX-compatible file system. This workflow incurs a large number of redundant file operations. To solve this problem, we propose using the Gfarm file system instead of HDFS. Gfarm is a POSIX-compatible distributed file system with an architecture similar to that of HDFS.
We design and implement a Hadoop-Gfarm plug-in that enables Hadoop MapReduce to access files on Gfarm efficiently. We compared MapReduce workload performance on HDFS, Gfarm, PVFS, and GlusterFS, all of which are open-source distributed file systems. Our evaluations show that Gfarm performed as well as Hadoop's native HDFS, and in most evaluations Gfarm performed more than twice as well as PVFS and GlusterFS.