FAQ: How do I configure memory parameters for my MapReduce jobs?

Answer:

Memory Parameters:

There are several memory parameters that users can configure:
  • mapreduce.{map|reduce}.memory.mb: Specifies how much virtual memory the entire process tree of each map/reduce task may use, in MB. If a task needs more virtual memory for its process tree, raise this option. For example, to set the limit to 1 GB, set the value to 1024. Currently, the maximum allowable virtual memory per task is 4 GB (see the parameter "mapred.cluster.max.{map|reduce}.memory.mb").
  • mapreduce.{map|reduce}.java.opts: Specifies the task's Java heap space. To set the upper limit, use the JVM option -Xmx. For example, to use 1 GB of heap space, pass in -Xmx1024m. Note that these JVM options are also passed to the child JVM processes. The Java heap is only part of a JVM process's virtual memory, so mapreduce.{map|reduce}.memory.mb should be larger than the heap size given in mapreduce.{map|reduce}.java.opts.
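For example, the two parameters for map tasks might be set together in a job configuration file as follows (a sketch using the property names from this FAQ; verify the exact names against your Hadoop version):

```
<!-- Sketch of a job-configuration fragment. The values are illustrative:
     the heap limit (-Xmx) must be smaller than the memory.mb limit,
     since the heap is only part of the JVM's virtual memory. -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>2048</value>  <!-- virtual-memory limit for the whole task process tree, in MB -->
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx1024m</value>  <!-- Java heap limit for the task JVM -->
</property>
```

The analogous mapreduce.reduce.* properties configure reduce tasks the same way.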
Memory Scheduling:

In OpenCloud, we install the Capacity Scheduler, which takes each job's memory requirements into account when scheduling MapReduce jobs.

First, in the current setting, each node has 6 map task slots and 4 reduce task slots.

For each map/reduce task slot, the default memory limit is specified by mapred.cluster.{map|reduce}.memory.mb; currently both are 2048 MB.

A task that requires more memory therefore occupies more slots. The number of task slots a task requests in order to be scheduled is:

ROUNDUP( mapred.job.map.memory.mb / mapred.cluster.map.memory.mb )

ROUNDUP( mapred.job.reduce.memory.mb / mapred.cluster.reduce.memory.mb )
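As a worked example (assuming the 2048 MB cluster defaults above), a job that sets mapred.job.map.memory.mb to 4096 needs:

```
ROUNDUP( 4096 / 2048 ) = 2 map slots per map task
```

With 6 map slots per node, at most 3 such map tasks can run concurrently on a node.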

Memory over-commit and solution:

In OpenCloud, resource over-allocation can happen in some cases. On each OpenCloud node, the current limit on the total virtual memory of user processes is 19.2 GB (physical memory * overcommit ratio + swap space = 16 * 0.7 + 8). Hadoop MapReduce jobs may over-use memory and cause job failures. Here is the scenario: each node can run up to 6 map task slots and 4 reduce task slots, and each task slot may use up to 2 GB of memory. Suppose a machine runs 6 map tasks and 4 reduce tasks, each consuming 2 GB; the tasks alone then consume 20 GB, which already exceeds the 19.2 GB limit. Since other processes also run on the machine (like the TaskTracker and DataNode) and the kernel sometimes consumes more than 4 GB, the total virtual memory demanded by all processes can exceed the limit by a wide margin. In this case, the OS kernel starts killing processes, or the map/reduce tasks cannot allocate memory for their child processes.

One quick solution is to change the parameters in the job configuration file. The idea is to request more resources but use fewer. Suppose you ask for 4 GB per map/reduce task (that is, set "mapred.job.map.memory.mb" and "mapred.job.reduce.memory.mb" to 4096), and you limit each task's heap space to 1500 MB (set "mapred.child.java.opts" to "-Xmx1500M") so that each task process consumes only around 2 GB. Each machine then runs at most 3 map tasks and 2 reduce tasks, which together consume at most about 10 GB and will not use up all virtual memory. In addition, JVM reuse must be disabled in Hadoop MapReduce; otherwise the JVM processes of finished tasks are not killed and still occupy virtual memory. The downside of this solution is that fewer tasks run simultaneously, so the job runs longer.
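The settings above could be sketched in the job configuration like this (property names and the JVM-reuse parameter are from Hadoop 0.20/1.x-era configs; verify them against your Hadoop version):

```
<!-- Sketch of the "request more, use less" workaround described above. -->
<property>
  <name>mapred.job.map.memory.mb</name>
  <value>4096</value>  <!-- request 2 map slots' worth of memory per map task -->
</property>
<property>
  <name>mapred.job.reduce.memory.mb</name>
  <value>4096</value>  <!-- request 2 reduce slots' worth of memory per reduce task -->
</property>
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx1500M</value>  <!-- cap the heap so actual use stays near 2 GB -->
</property>
<property>
  <name>mapred.job.reuse.jvm.num.tasks</name>
  <value>1</value>  <!-- 1 = run one task per JVM, i.e. disable JVM reuse -->
</property>
```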

Back to: CloudClusterFAQ

-- KaiRen - 11 Mar 2011
Topic revision: r2 - 11 Mar 2011, KaiRen
