
I just want to ask your opinion about the HDFS block size. I set the HDFS block size to 24 MB and it runs normally. But I remember that 24 MB is not a power of two, which is the usual kind of size on computers. So I want to ask all of you: what is your opinion of 24 MB?

Thanks all....

2 Answers


Yes, it is possible to set the HDFS block size to 24 MB. The default in Hadoop 1.x is 64 MB and in Hadoop 2.x it is 128 MB.

In my opinion, you should increase the block size. The larger the block size, the fewer splits (and therefore map tasks) a file produces, so less time is spent in the reduce phase and the job speeds up. If you reduce the block size instead, each map task finishes faster, but chances are that more time will be spent in the reduce phase, increasing the overall run time. See the worked example below.
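As a rough worked example (the 1 GB file here is just an assumption for illustration): with a 24 MB block size a 1 GB file is split into ceil(1024 / 24) = 43 blocks, so about 43 map tasks run, while a 128 MB block size gives only 8 blocks and 8 map tasks. You can see how a file you have already uploaded was actually split with fsck (the path is a placeholder):

hdfs fsck /user/hadoop/input/data.txt -files -blocks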

You can change the block size with the following command while transferring a file from the local file system to HDFS:

hadoop fs -D dfs.blocksize=<blocksize> -put <source_filename> <destination>
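For instance, to upload a file with a 24 MB block size as in the question (24 * 1024 * 1024 = 25165824 bytes; the file and target directory names below are placeholders):

hadoop fs -D dfs.blocksize=25165824 -put data.txt /user/hadoop/input

On Hadoop 2.x the value can also be given with a size suffix, e.g. -D dfs.blocksize=24m.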

A permanent change of the default block size can be made by adding the following to hdfs-site.xml:

<property>
  <name>dfs.block.size</name>
  <value>134217728</value>
  <description>Block size</description>
</property>
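For the 24 MB size from the question, the equivalent entry would look roughly like this (using dfs.blocksize, the Hadoop 2.x name for the same setting as the older, deprecated dfs.block.size; 25165824 = 24 * 1024 * 1024):

<property>
  <name>dfs.blocksize</name>
  <value>25165824</value>
  <description>Block size of 24 MB</description>
</property>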
V Sree Harissh
But I don't write it in bytes; I write it like 28m, because HDFS supports writing it with that suffix. And how can you explain the block size not being a power of two? Thanks – Kenny Basuki Jun 11 '15 at 18:07

Yes, it is possible to set the block size in a Hadoop environment. Simply go to /usr/local/hadoop/conf/hdfs-site.xml and change the block size value. Refer to: http://commandstech.com/blocksize-in-hadoop/
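After editing hdfs-site.xml (HDFS needs to be restarted for the new default to apply to newly written files), one way to check which value the configuration actually resolves to is getconf; this assumes the standard Hadoop 2.x property name:

hdfs getconf -confKey dfs.blocksize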