java - How to limit a Hadoop MapReduce job to a certain number of nodes?
So, I have a system with 4 data nodes. However, to check the scalability of my Hadoop application, I want to test it with 1, 2, and 4 nodes. So, how can I limit the number of nodes used by Hadoop to 1 or 2? I am using Hadoop 2.5.1 and I don't have admin rights on the system. Moreover, how can I control the number of cores used by Hadoop per node?
You need admin rights for that.
How can I limit the number of nodes used by Hadoop to 1 or 2?
Decommission 2 or 3 of the nodes.
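As a sketch of how decommissioning is usually done (the file paths and hostnames below are examples; the exclude files must match whatever `dfs.hosts.exclude` in hdfs-site.xml and `yarn.resourcemanager.nodes.exclude-path` in yarn-site.xml point to on your cluster):

```shell
# Add the hosts to retire to the HDFS and YARN exclude files.
# (Paths are illustrative; use the ones configured on your cluster.)
echo "datanode3.example.com" >> /etc/hadoop/conf/dfs.exclude
echo "datanode3.example.com" >> /etc/hadoop/conf/yarn.exclude

# Tell the NameNode and ResourceManager to re-read the exclude lists.
hdfs dfsadmin -refreshNodes
yarn rmadmin -refreshNodes
```

The NameNode will then re-replicate the blocks held on the excluded DataNodes before marking them decommissioned, so wait for the state to change (visible in the NameNode web UI) before running your benchmark.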
How can I control the number of cores used by Hadoop per node?
Set the config below in yarn-site.xml to allocate 8 vcores per node:
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>8</value>
</property>
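Note that `yarn.nodemanager.resource.cpu-vcores` only caps what each NodeManager advertises to the scheduler; each MapReduce task requests its own vcores via the standard `mapreduce.map.cpu.vcores` and `mapreduce.reduce.cpu.vcores` properties (default 1 each). A sketch, with illustrative values, for mapred-site.xml or per-job `-D` flags:

```xml
<!-- mapred-site.xml (or pass per job, e.g. -Dmapreduce.map.cpu.vcores=2) -->
<property>
  <name>mapreduce.map.cpu.vcores</name>
  <value>2</value> <!-- vcores requested per map container -->
</property>
<property>
  <name>mapreduce.reduce.cpu.vcores</name>
  <value>2</value> <!-- vcores requested per reduce container -->
</property>
```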
Also update yarn.scheduler.capacity.resource-calculator in capacity-scheduler.xml, because the default, DefaultResourceCalculator, only considers memory:
<property>
  <name>yarn.scheduler.capacity.resource-calculator</name>
  <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
  <description>
    The ResourceCalculator implementation to be used to compare
    Resources in the scheduler. The default i.e. DefaultResourceCalculator
    only uses Memory while DominantResourceCalculator uses
    Dominant-resource to compare multi-dimensional resources such as
    Memory, CPU etc.
  </description>
</property>
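To confirm what each node actually advertises after these changes, you can query the ResourceManager from the command line (the node ID below is an example; use the IDs your own `yarn node -list` prints):

```shell
# List active NodeManagers and their node IDs.
yarn node -list

# Show one node's advertised resource capability (memory and vcores).
yarn node -status datanode1.example.com:45454
```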