Hadoop YARN Cluster Installation

Author: wencst | Category: Architecture Design | Published: 2018-12-17 11:17 | Views: 651

On all nodes:

wget https://mirrors.tuna.tsinghua.edu.cn/apache/hadoop/common/hadoop-3.1.1/hadoop-3.1.1.tar.gz

Setting IP addresses, /etc/hosts, passwordless SSH login, scp, sudo, disabling the firewall, yum, and NTP time synchronization are omitted here.
Java installation is also omitted.
Reference: http://www.cnblogs.com/pojishou/p/6267616.html
Basic changes on all nodes:

yum install -y expect
yum install -y telnet

Add the cluster hosts to /etc/hosts on every node:

10.8.5.180 hadoop1
10.8.5.181 hadoop2
10.8.5.182 hadoop3

Create the hadoop user:

groupadd hadoop
useradd hadoop -g hadoop
passwd hadoop

On the master node

Create the keys for passwordless login.

auto-key.sh:

#!/bin/bash
# Copy this host's SSH public key to every node listed in ./nodes,
# answering the host-key and password prompts automatically via expect.
PASSWORD=hadoop123
auto_ssh_copy_id() {
    expect -c "set timeout -1;
        spawn ssh-copy-id $1;
        expect {
            *(yes/no)* {send -- yes\r; exp_continue;}
            *assword:* {send -- $2\r; exp_continue;}
            eof {exit 0;}
        }"
}
cat nodes | while read host
do
    {
        auto_ssh_copy_id "$host" "$PASSWORD"
    } &
    wait
done


exec.sh:

#!/bin/bash
# Run the command given in $1 on every node listed in ./nodes.
cat nodes | while read host
do
    {
        ssh $host "$1"
    } &
    wait
done
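Since exec.sh forwards only $1 to ssh, the remote command must always be passed as a single quoted string. A self-contained sketch of the quoting behavior (show_first_arg is a hypothetical stand-in for the script, which only looks at its first argument):

```shell
# show_first_arg stands in for exec.sh: it only inspects $1.
show_first_arg() { echo "first arg: $1"; }

show_first_arg tar -zxvf file.tar.gz    # unquoted: $1 is just "tar"
show_first_arg "tar -zxvf file.tar.gz"  # quoted: the whole command is $1
```

The first call prints `first arg: tar`; the second prints `first arg: tar -zxvf file.tar.gz`, which is why every exec.sh invocation below quotes its argument.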
scp.sh:

#!/bin/bash
# Copy a local file or directory ($1) to the path $2 on every node.
cat nodes | while read host
do
    {
        scp -r "$1" $host:"$2"
    } &
    wait
done
nodes (one hostname per line):

hadoop1
hadoop2
hadoop3
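All three helper scripts share the same read loop over the nodes file. A self-contained sketch of that pattern, echoing instead of ssh-ing and using a throwaway file under /tmp:

```shell
# Build a throwaway nodes file and iterate it the same way the scripts do:
# one iteration per line, with the hostname available as $host.
printf 'hadoop1\nhadoop2\nhadoop3\n' > /tmp/nodes_demo
cat /tmp/nodes_demo | while read host
do
    echo "would run on: $host"
done
```

This prints one "would run on: ..." line per host; in the real scripts the echo is replaced by ssh-copy-id, ssh, or scp.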
./auto-key.sh
Java installation from the master node:

./exec.sh "tar -zxvf jdk-9.0.4_linux-x64_bin.tar.gz -C /home/hadoop/"
./exec.sh "echo 'export JAVA_HOME=/home/hadoop/jdk-9.0.4' >> /etc/profile"
./exec.sh "echo 'export PATH=\$JAVA_HOME/bin:\$PATH' >> /etc/profile"
./exec.sh "source /etc/profile"   # note: this only affects that one remote session; the variables take effect at the next login since they are in /etc/profile

To apply the same edit to a file on every node, e.g. replace 'abc' with 'xxx' in file:
./exec.sh "sed -i 's/abc/xxx/g' file"
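A local demonstration of that sed replacement, using a throwaway /tmp file with the same placeholder strings:

```shell
# Create a scratch file, apply the same in-place substitution, and inspect it.
printf 'abc def\nxyz abc\n' > /tmp/sed_demo
sed -i 's/abc/xxx/g' /tmp/sed_demo
cat /tmp/sed_demo   # every occurrence of abc becomes xxx
```

Note that `sed -i` with no backup suffix is GNU sed syntax, which matches the CentOS/yum environment assumed throughout this guide.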
Hadoop installation on the master node:

# Run the following commands as the hadoop user
su hadoop
# Unpack the tarball downloaded earlier
tar -zxvf hadoop-3.1.1.tar.gz -C /home/hadoop/
# Add the JAVA_HOME setting
vi /home/hadoop/hadoop-3.1.1/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/home/hadoop/jdk-9.0.4
# Edit core-site.xml
vi /home/hadoop/hadoop-3.1.1/etc/hadoop/core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop1:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/hadoop-3.1.1/tmp</value>
  </property>
</configuration>
# Edit hdfs-site.xml
vi /home/hadoop/hadoop-3.1.1/etc/hadoop/hdfs-site.xml
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/home/hadoop/hadoop-3.1.1/data/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/hadoop/hadoop-3.1.1/data/data</value>
  </property>

  <!-- With only two DataNodes (hadoop2, hadoop3), a replication factor of 3
       cannot be satisfied; 2 matches this cluster. -->
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>

  <!-- Hadoop 3 renamed dfs.secondary.http.address to
       dfs.namenode.secondary.http-address -->
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>hadoop1:50090</value>
  </property>
</configuration>

# Edit mapred-site.xml
vi /home/hadoop/hadoop-3.1.1/etc/hadoop/mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
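On Hadoop 3.x, MapReduce jobs submitted to YARN commonly fail with "Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster" unless the MapReduce home is exported to the containers. A hedged addition to mapred-site.xml, assuming the install path used above:

```xml
<!-- Point YARN containers at the Hadoop installation so MapReduce
     classes are on their classpath (Hadoop 3.x requirement). -->
<property>
  <name>yarn.app.mapreduce.am.env</name>
  <value>HADOOP_MAPRED_HOME=/home/hadoop/hadoop-3.1.1</value>
</property>
<property>
  <name>mapreduce.map.env</name>
  <value>HADOOP_MAPRED_HOME=/home/hadoop/hadoop-3.1.1</value>
</property>
<property>
  <name>mapreduce.reduce.env</name>
  <value>HADOOP_MAPRED_HOME=/home/hadoop/hadoop-3.1.1</value>
</property>
```

These three properties come from the Apache Hadoop 3 cluster-setup documentation; adjust the path if your installation directory differs.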
# Edit yarn-site.xml
vi /home/hadoop/hadoop-3.1.1/etc/hadoop/yarn-site.xml
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>hadoop1</value>
  </property>

  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>

# Edit the workers file (this file was called slaves in Hadoop 2; Hadoop 3 uses workers)
vi /home/hadoop/hadoop-3.1.1/etc/hadoop/workers
hadoop2
hadoop3
# Distribute the configured installation to the other nodes
./scp.sh /home/hadoop/hadoop-3.1.1 /home/hadoop/
# Format HDFS (hadoop namenode -format is deprecated in favor of hdfs namenode -format)
/home/hadoop/hadoop-3.1.1/bin/hdfs namenode -format
# Start the cluster
/home/hadoop/hadoop-3.1.1/sbin/start-all.sh
# Test with the bundled pi example; the NameNode web UI is at http://hadoop1:9870
# and the ResourceManager UI at http://hadoop1:8088 (Hadoop 3 defaults)
/home/hadoop/hadoop-3.1.1/bin/hadoop jar /home/hadoop/hadoop-3.1.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.1.jar pi 5 10
