51Testing Software Testing Forum

Title: Hadoop Installation

Author: 测试积点老人    Time: 2018-12-27 15:04
Title: Hadoop Installation
Configure core-site.xml under the /home/hadoop/hadoop-3.0.0-alpha4/etc/hadoop directory:
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://172.17.0.2:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/home/hadoop/tmp</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131702</value>
    </property>
</configuration>
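All of the *-site.xml files in this walkthrough use the same name/value property format, so a quick way to sanity-check a file before starting the cluster is to parse it and look at the resulting dictionary. A minimal sketch using Python's standard library (the inline `core_site` string is just the snippet above; in practice you would read the real file):

```python
import xml.etree.ElementTree as ET

def load_hadoop_conf(xml_text):
    """Parse a Hadoop *-site.xml string into a {name: value} dict."""
    root = ET.fromstring(xml_text)
    conf = {}
    for prop in root.findall("property"):
        # Each <property> carries exactly one <name> and one <value>.
        conf[prop.findtext("name")] = prop.findtext("value")
    return conf

core_site = """<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://172.17.0.2:9000</value>
    </property>
</configuration>"""

print(load_hadoop_conf(core_site)["fs.defaultFS"])  # hdfs://172.17.0.2:9000
```

The same helper works unchanged for hdfs-site.xml, mapred-site.xml, and yarn-site.xml, since they all share the `<configuration>/<property>` layout.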
Configure hdfs-site.xml under the /home/hadoop/hadoop-3.0.0-alpha4/etc/hadoop directory:
<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/home/hadoop/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/home/hadoop/dfs/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>172.17.0.2:9001</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>
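Note that `dfs.replication` is 2 here, which only works because the cluster (set up later in this post) has two DataNodes; if the replication factor exceeds the number of live DataNodes, every block stays under-replicated. A tiny hedged check, with the node count hard-coded as an assumption from this setup:

```python
def replication_satisfiable(replication: int, datanodes: int) -> bool:
    """True if every HDFS block can receive its full set of replicas."""
    return 1 <= replication <= datanodes

# This setup: dfs.replication = 2, two workers (172.17.0.3 and 172.17.0.4).
print(replication_satisfiable(2, 2))  # True
print(replication_satisfiable(3, 2))  # False: blocks would stay under-replicated
```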
Configure mapred-site.xml under the /home/hadoop/hadoop-3.0.0-alpha4/etc/hadoop directory:
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>172.17.0.2:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>172.17.0.2:19888</value>
    </property>
</configuration>
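Going the other way, these property blocks can also be generated from a plain dict, which avoids copy-paste indentation and typo mistakes when you maintain several site files. A minimal sketch with ElementTree; the dict keys are taken from the snippet above:

```python
import xml.etree.ElementTree as ET

def build_site_xml(props):
    """Render a {name: value} dict as a Hadoop <configuration> document."""
    root = ET.Element("configuration")
    for name, value in props.items():
        prop = ET.SubElement(root, "property")
        ET.SubElement(prop, "name").text = name
        ET.SubElement(prop, "value").text = str(value)
    return ET.tostring(root, encoding="unicode")

mapred = {
    "mapreduce.framework.name": "yarn",
    "mapreduce.jobhistory.address": "172.17.0.2:10020",
    "mapreduce.jobhistory.webapp.address": "172.17.0.2:19888",
}
xml_text = build_site_xml(mapred)
print("<name>mapreduce.framework.name</name>" in xml_text)  # True
```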
Configure yarn-site.xml under the /home/hadoop/hadoop-3.0.0-alpha4/etc/hadoop directory:
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>172.17.0.2:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>172.17.0.2:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>172.17.0.2:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>172.17.0.2:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>172.17.0.2:8088</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>768</value>
    </property>
</configuration>
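All five `yarn.resourcemanager.*.address` values must point at the same ResourceManager host (here 172.17.0.2); only the ports differ. A mistyped host in one entry is easy to miss by eye, so here is a throwaway helper that checks them for agreement (the address list is copied from the snippet above):

```python
rm_addresses = {
    "yarn.resourcemanager.address": "172.17.0.2:8032",
    "yarn.resourcemanager.scheduler.address": "172.17.0.2:8030",
    "yarn.resourcemanager.resource-tracker.address": "172.17.0.2:8031",
    "yarn.resourcemanager.admin.address": "172.17.0.2:8033",
    "yarn.resourcemanager.webapp.address": "172.17.0.2:8088",
}

def same_rm_host(addresses):
    """Return the single ResourceManager host, or None if entries disagree."""
    hosts = {addr.rsplit(":", 1)[0] for addr in addresses.values()}
    return hosts.pop() if len(hosts) == 1 else None

print(same_rm_host(rm_addresses))  # 172.17.0.2
```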
Set JAVA_HOME in hadoop-env.sh and yarn-env.sh under the /home/hadoop/hadoop-3.0.0-alpha4/etc/hadoop directory; without it, the daemons will not start:
export JAVA_HOME=/home/java/jdk1.7.0_79
Configure the workers file under the /home/hadoop/hadoop-3.0.0-alpha4/etc/hadoop directory (note: in older versions this file was called slaves). Delete the default localhost entry and add the two worker nodes:
172.17.0.3
172.17.0.4
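Every line of the workers file must be a resolvable hostname or IP with no stray whitespace; a malformed line silently costs you a DataNode at startup. A small sketch that validates entries as IPv4 addresses using Python's standard library (for hostnames you would need a DNS lookup instead, which is out of scope here):

```python
import ipaddress

def valid_workers(lines):
    """Return the entries that parse cleanly as IPv4 addresses."""
    good = []
    for line in lines:
        line = line.strip()
        if not line:
            continue  # skip blank lines
        try:
            ipaddress.IPv4Address(line)
            good.append(line)
        except ipaddress.AddressValueError:
            pass  # not an IPv4 literal; would need DNS resolution to verify
    return good

workers = ["172.17.0.3", "172.17.0.4"]
print(valid_workers(workers))  # ['172.17.0.3', '172.17.0.4']
```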
Configure the startup users: add the following to both start-dfs.sh and stop-dfs.sh:
HDFS_DATANODE_USER=root
HADOOP_SECURE_DN_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root
If the settings above are missing, the following errors appear:
[root@deb3b84de619 hadoop-3.0.0-alpha4]# sbin/start-all.sh
Starting namenodes on [localhost]
ERROR: Attempting to launch hdfs namenode as root
ERROR: but there is no HDFS_NAMENODE_USER defined. Aborting launch.
Starting datanodes
ERROR: Attempting to launch hdfs datanode as root
ERROR: but there is no HDFS_DATANODE_USER defined. Aborting launch.
Starting secondary namenodes [VM_128_191_centos]
ERROR: Attempting to launch hdfs secondarynamenode as root
ERROR: but there is no HDFS_SECONDARYNAMENODE_USER defined. Aborting launch.
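The pattern in that log is regular: each "there is no X defined" line names exactly the variable that must be added to start-dfs.sh. A small sketch that pulls those names out of such output automatically (the log text is copied from above):

```python
import re

log = """ERROR: Attempting to launch hdfs namenode as root
ERROR: but there is no HDFS_NAMENODE_USER defined. Aborting launch.
ERROR: Attempting to launch hdfs datanode as root
ERROR: but there is no HDFS_DATANODE_USER defined. Aborting launch.
ERROR: Attempting to launch hdfs secondarynamenode as root
ERROR: but there is no HDFS_SECONDARYNAMENODE_USER defined. Aborting launch."""

def missing_user_vars(text):
    """Extract the undefined *_USER variable names from start-dfs.sh output."""
    return re.findall(r"there is no (\w+_USER) defined", text)

print(missing_user_vars(log))
# ['HDFS_NAMENODE_USER', 'HDFS_DATANODE_USER', 'HDFS_SECONDARYNAMENODE_USER']
```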

Welcome to the 51Testing Software Testing Forum (http://bbs.51testing.com/). Powered by Discuz! X3.2