Hadoop YARN Cluster Installation
On all nodes:
Configure IP addresses, /etc/hosts, passwordless SSH login, scp, sudo, firewall shutdown, yum, and NTP time synchronization (omitted here).
Java installation (omitted here).
Basic setup on all nodes:
yum install -y expect
yum install -y telnet
Edit /etc/hosts on all nodes:
10.8.5.180 hadoop1
10.8.5.181 hadoop2
10.8.5.182 hadoop3
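A malformed hosts entry is easy to miss until name resolution fails on one node, so it is worth checking the entries are well-formed before copying them everywhere. A minimal local sketch (`hosts-snippet` is a hypothetical scratch file standing in for the three lines above):

```shell
# Stage the three entries, then verify each line is "IPv4 hostname" shaped
# before appending them to /etc/hosts on every node.
cat > hosts-snippet <<'EOF'
10.8.5.180 hadoop1
10.8.5.181 hadoop2
10.8.5.182 hadoop3
EOF

# Count lines matching: four dotted octets, whitespace, a lowercase hostname.
ok=$(grep -Ec '^([0-9]{1,3}\.){3}[0-9]{1,3}[[:space:]]+[a-z0-9.-]+$' hosts-snippet)
total=$(wc -l < hosts-snippet)
echo "well-formed: $ok of $total"
```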
Create the hadoop user:
groupadd hadoop
useradd hadoop -g hadoop
passwd hadoop
On the master node:
Create keys for passwordless SSH login.
auto-key.sh
#!/bin/bash
PASSWORD=hadoop123
auto_ssh_copy_id() {
    expect -c "set timeout -1;
        spawn ssh-copy-id $1;
        expect {
            *(yes/no)* {send -- yes\r; exp_continue;}
            *assword:* {send -- $2\r; exp_continue;}
            eof {exit 0;}
        }"
}
cat nodes | while read host
do
  {
    auto_ssh_copy_id $host $PASSWORD
  } &
  wait
done
exec.sh
#!/bin/bash
cat nodes | while read host
do
  {
    ssh $host "$1"
  } &
  wait
done
scp.sh
#!/bin/bash
cat nodes | while read host
do
  {
    scp -r "$1" $host:"$2"
  } &
  wait
done
nodes
hadoop1
hadoop2
hadoop3
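All three helper scripts share one pattern: read a host per line from nodes and run a command against it in a background subshell. A dry run with echo substituted for ssh (a local sketch; no real hosts or keys needed, and `dryrun.log` is a scratch file) shows the fan-out:

```shell
# Dry-run of the exec.sh pattern: 'echo' stands in for 'ssh' so it runs locally.
cat > nodes <<'EOF'
hadoop1
hadoop2
hadoop3
EOF

: > dryrun.log
cat nodes | while read host
do
  {
    # exec.sh would run: ssh $host "$1" -- here we just record what it would do
    echo "would run on $host: hostname" | tee -a dryrun.log
  } &
  wait
done
```

Note that because the `wait` sits inside the loop body, each host finishes before the next one starts; moving the `wait` after `done` would let all hosts run in parallel.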
./auto-key.sh
Install Java from the master node:
./exec.sh "tar -zxvf jdk-9.0.4_linux-x64_bin.tar.gz -C /home/hadoop/"
./exec.sh "echo 'export JAVA_HOME=/home/hadoop/jdk-9.0.4' >> /etc/profile"
./exec.sh "echo 'export PATH=\$JAVA_HOME/bin:\$PATH' >> /etc/profile"
# Note: sourcing /etc/profile over ssh only affects that one remote shell;
# the new variables take effect automatically in subsequent login sessions.
./exec.sh "source /etc/profile"
To uniformly replace 'abc' with 'xxx' in a file on every node:
./exec.sh "sed -i 's/abc/xxx/g' file"
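Since sed -i edits in place with no backup, it is worth rehearsing the expression on a scratch copy before fanning it out to every node. A local sketch (`sample.txt` is a placeholder name):

```shell
# Rehearse the substitution locally before running it cluster-wide via exec.sh.
printf 'abc here\nplain line\nabc again\n' > sample.txt

# Same expression as the cluster-wide command: replace every 'abc' with 'xxx'.
sed -i 's/abc/xxx/g' sample.txt

grep -c xxx sample.txt   # two lines now contain xxx
```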
Install Hadoop from the master node:
#Run the following commands as the hadoop user
su hadoop
#Unpack the Hadoop tarball
tar -zxvf hadoop-3.1.1/hadoop-3.1.1.tar.gz -C /home/hadoop/
#Add the JAVA_HOME setting
vi /home/hadoop/hadoop-3.1.1/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/home/hadoop/jdk-9.0.4
#Edit core-site.xml
vi /home/hadoop/hadoop-3.1.1/etc/hadoop/core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop1:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hadoop/hadoop-3.1.1/tmp</value>
</property>
</configuration>
#Edit hdfs-site.xml
vi /home/hadoop/hadoop-3.1.1/etc/hadoop/hdfs-site.xml
<configuration>
<property>
<name>dfs.namenode.name.dir</name>
<value>/home/hadoop/hadoop-3.1.1/data/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/home/hadoop/hadoop-3.1.1/data/data</value>
</property>
<property>
<name>dfs.replication</name>
<!-- only hadoop2 and hadoop3 run DataNodes in this guide, so a factor of 3
     leaves blocks under-replicated; 2 would match the topology -->
<value>3</value>
</property>
<property>
<!-- Hadoop 3.x name; the 2.x name was dfs.secondary.http.address -->
<name>dfs.namenode.secondary.http-address</name>
<value>hadoop1:50090</value>
</property>
</configuration>
#Edit mapred-site.xml
vi /home/hadoop/hadoop-3.1.1/etc/hadoop/mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
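On Hadoop 3.x, MapReduce jobs submitted to YARN often abort with a "Could not find or load main class ...MRAppMaster" error unless the MapReduce processes are told where the Hadoop installation lives. If the pi example in the test step at the end of this guide fails that way, adding these properties to mapred-site.xml is the usual fix (the paths below assume this guide's install location):

```xml
<property>
<name>yarn.app.mapreduce.am.env</name>
<value>HADOOP_MAPRED_HOME=/home/hadoop/hadoop-3.1.1</value>
</property>
<property>
<name>mapreduce.map.env</name>
<value>HADOOP_MAPRED_HOME=/home/hadoop/hadoop-3.1.1</value>
</property>
<property>
<name>mapreduce.reduce.env</name>
<value>HADOOP_MAPRED_HOME=/home/hadoop/hadoop-3.1.1</value>
</property>
```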
#Edit yarn-site.xml
vi /home/hadoop/hadoop-3.1.1/etc/hadoop/yarn-site.xml
<configuration>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>hadoop1</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>
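A stray or collapsed tag in any of these files makes the daemons fail at startup with an XML parse error, so a quick well-formedness check before shipping the configs out is cheap insurance. A local sketch using python3's built-in parser (`sample-site.xml` stands in for the real config paths):

```shell
# Check that a Hadoop-style XML config parses cleanly before distributing it.
cat > sample-site.xml <<'EOF'
<configuration>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>hadoop1</value>
</property>
</configuration>
EOF

# python3's xml.etree exits non-zero on malformed XML.
if python3 -c "import xml.etree.ElementTree as ET; ET.parse('sample-site.xml')"; then
  echo "sample-site.xml: well-formed"
else
  echo "sample-site.xml: PARSE ERROR"
fi
```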
#Edit the workers file (Hadoop 3.x; it was named slaves in 2.x)
vi /home/hadoop/hadoop-3.1.1/etc/hadoop/workers
hadoop2
hadoop3
#Format HDFS (run once)
/home/hadoop/hadoop-3.1.1/bin/hdfs namenode -format
#Start the cluster (start-all.sh is deprecated in 3.x; start-dfs.sh followed by start-yarn.sh is the preferred form)
/home/hadoop/hadoop-3.1.1/sbin/start-all.sh
#Test: run the built-in pi example on YARN
/home/hadoop/hadoop-3.1.1/bin/hadoop jar /home/hadoop/hadoop-3.1.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.1.jar pi 5 10