Quickly setting up a standalone Flink cluster on a bare machine

mac2025-09-28  11

First, clone a pre-prepared bare VM (if you don't have one, create a new one from scratch). After the first boot, start by editing the network interface configuration:

vi /etc/sysconfig/network-scripts/ifcfg-eth0

Configure the hostname (takes effect after a reboot):

[root@CentOS ~]# vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=CentOS

Set up the IP-to-hostname mapping:

[root@CentOS ~]# vi /etc/hosts
127.0.0.1      localhost localhost.localdomain localhost4 localhost4.localdomain4
::1            localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.40.110 CentOS
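Before moving on, it is worth confirming the mapping resolves. A minimal lookup sketch (the hosts content is inlined here for illustration; on the real machine you would read /etc/hosts itself):

```shell
# Sketch: look up the IP mapped to a hostname in hosts-file format.
# The sample content mirrors the /etc/hosts edit above.
hosts_content="127.0.0.1 localhost
192.168.40.110 CentOS"

lookup() {
  # print the IP of any line that lists the given hostname
  echo "$hosts_content" | awk -v h="$1" '{ for (i = 2; i <= NF; i++) if ($i == h) print $1 }'
}

lookup CentOS   # expected: 192.168.40.110
```

On the real host, `ping -c 1 CentOS` gives the same confirmation end to end.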

Install JDK 1.8+:

[root@CentOS ~]# rpm -ivh jdk-8u171-linux-x64.rpm
[root@CentOS ~]# ls -l /usr/java/
total 4
lrwxrwxrwx. 1 root root   16 Mar 26 00:56 default -> /usr/java/latest
drwxr-xr-x. 9 root root 4096 Mar 26 00:56 jdk1.8.0_171-amd64
lrwxrwxrwx. 1 root root   28 Mar 26 00:56 latest -> /usr/java/jdk1.8.0_171-amd64
[root@CentOS ~]# vi .bashrc
JAVA_HOME=/usr/java/latest
PATH=$PATH:$JAVA_HOME/bin
CLASSPATH=.
export JAVA_HOME
export PATH
export CLASSPATH
[root@CentOS ~]# source ~/.bashrc
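The .bashrc edit relies on ordinary PATH concatenation; a self-contained sketch of the same pattern (nothing here requires the JDK to actually be installed):

```shell
# Sketch of the PATH export pattern used in .bashrc above.
# /usr/java/latest is the symlink created by the JDK rpm in this guide.
JAVA_HOME=/usr/java/latest
PATH=$PATH:$JAVA_HOME/bin
export JAVA_HOME PATH

# confirm the JDK bin directory is now on the search path
echo "$PATH" | grep -q "/usr/java/latest/bin" && echo "JAVA_HOME/bin is on PATH"
```

After `source ~/.bashrc`, `which java` should point into /usr/java/latest/bin.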

Make sure HDFS can start normally (set up passwordless SSH first):

[root@CentOS ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
4b:29:93:1c:7f:06:93:67:fc:c5:ed:27:9b:83:26:c0 root@CentOS
The key's randomart image is:
(ASCII randomart omitted)
[root@CentOS ~]# ssh-copy-id CentOS
The authenticity of host 'centos (192.168.40.128)' can't be established.
RSA key fingerprint is 3f:86:41:46:f2:05:33:31:5d:b6:11:45:9c:64:12:8e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'centos,192.168.40.128' (RSA) to the list of known hosts.
root@centos's password:
Now try logging into the machine, with "ssh 'CentOS'", and check in:
  .ssh/authorized_keys
to make sure we haven't added extra keys that you weren't expecting.
[root@CentOS ~]# ssh root@CentOS
Last login: Tue Mar 26 01:03:52 2019 from 192.168.40.1
[root@CentOS ~]# exit
logout
Connection to CentOS closed.
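If passwordless login still prompts for a password after ssh-copy-id, the usual culprit is permissions: sshd refuses keys when ~/.ssh is not 700 or authorized_keys is not 600. A sketch of the fix, run against a temp directory here so it is side-effect free (a real fix would target /root/.ssh):

```shell
# Sketch: enforce the permissions sshd expects on ~/.ssh.
# A mktemp dir stands in for the real home directory in this illustration.
home=$(mktemp -d)
mkdir -p "$home/.ssh"
touch "$home/.ssh/authorized_keys"
chmod 700 "$home/.ssh"
chmod 600 "$home/.ssh/authorized_keys"
# GNU stat (Linux); prints the octal mode of each path
stat -c '%a %n' "$home/.ssh" "$home/.ssh/authorized_keys"
```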

Configure HDFS | YARN

Extract hadoop-2.9.2.tar.gz into the system's /usr directory, then edit the [core|hdfs|yarn|mapred]-site.xml configuration files.

[root@CentOS ~]# vi /usr/hadoop-2.9.2/etc/hadoop/core-site.xml

<!-- NameNode access entry point -->
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://CentOS:9000</value>
</property>
<!-- base working directory for HDFS -->
<property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/hadoop-2.9.2/hadoop-${user.name}</value>
</property>

[root@CentOS ~]# vi /usr/hadoop-2.9.2/etc/hadoop/hdfs-site.xml

<!-- block replication factor -->
<property>
    <name>dfs.replication</name>
    <value>1</value>
</property>
<!-- host running the Secondary NameNode -->
<property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>CentOS:50090</value>
</property>
<!-- maximum number of files a DataNode serves at once -->
<property>
    <name>dfs.datanode.max.xcievers</name>
    <value>4096</value>
</property>
<!-- number of DataNode handler threads -->
<property>
    <name>dfs.datanode.handler.count</name>
    <value>6</value>
</property>
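A quick way to sanity-check a value in one of these *-site.xml files after editing is to pull the property out with sed. This sketch inlines a one-line fragment mirroring hdfs-site.xml above (it is deliberately naive and only handles a name/value pair on a single line; `hdfs getconf` is the robust route on a real cluster):

```shell
# Sketch: read a Hadoop *-site.xml property value from a single-line fragment.
xml='<property><name>dfs.replication</name><value>1</value></property>'

get_prop() {
  # print the <value> paired with the given <name>
  echo "$xml" | sed -n "s:.*<name>$1</name><value>\([^<]*\)</value>.*:\1:p"
}

get_prop dfs.replication   # expected: 1
```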

[root@CentOS ~]# vi /usr/hadoop-2.9.2/etc/hadoop/yarn-site.xml

<!-- auxiliary shuffle service required by the MapReduce framework -->
<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
<!-- host running the ResourceManager -->
<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>CentOS</value>
</property>
<!-- disable the physical memory check -->
<property>
    <name>yarn.nodemanager.pmem-check-enabled</name>
    <value>false</value>
</property>
<!-- disable the virtual memory check -->
<property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
</property>
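For context on the two checks being disabled: by default YARN kills any container whose virtual memory exceeds yarn.nodemanager.vmem-pmem-ratio (default 2.1) times its physical allocation, which frequently kills healthy JVMs on small test boxes. A sketch of that limit calculation (the 1024 MB container size is illustrative):

```shell
# Sketch: the virtual-memory limit YARN enforces per container by default.
pmem_mb=1024   # illustrative physical allocation for one container
ratio=2.1      # default yarn.nodemanager.vmem-pmem-ratio
awk -v p="$pmem_mb" -v r="$ratio" 'BEGIN { printf "vmem limit: %.0f MB\n", p * r }'
# prints: vmem limit: 2150 MB
```

Disabling the checks, as above, is the usual shortcut for a single-node sandbox; on production clusters the ratio is tuned instead.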

[root@CentOS ~]# vi /usr/hadoop-2.9.2/etc/hadoop/mapred-site.xml

<!-- run MapReduce on the YARN resource manager -->
<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>

Configure the Hadoop environment variables

[root@CentOS ~]# vi .bashrc
HADOOP_HOME=/usr/hadoop-2.9.2
JAVA_HOME=/usr/java/latest
CLASSPATH=.
PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export JAVA_HOME
export CLASSPATH
export PATH
export HADOOP_HOME
[root@CentOS ~]# source .bashrc

Start the Hadoop services

[root@CentOS ~]# hdfs namenode -format   # creates the initial fsimage file
[root@CentOS ~]# start-dfs.sh
[root@CentOS ~]# start-yarn.sh

Install Flink

Upload and extract Flink:

[root@centos ~]# tar -zxf flink-1.8.1-bin-scala_2.11.tgz -C /usr/

Configure flink-conf.yaml:

[root@centos ~]# vi /usr/flink-1.8.1/conf/flink-conf.yaml
jobmanager.rpc.address: centos
taskmanager.numberOfTaskSlots: 4
parallelism.default: 3

Configure slaves:

[root@centos ~]# vi /usr/flink-1.8.1/conf/slaves
centos

Start Flink:

[root@centos flink-1.8.1]# ./bin/start-cluster.sh
Starting cluster.
Starting standalonesession daemon on host centos.
Starting taskexecutor daemon on host centos.
[root@centos flink-1.8.1]# jps
2912 Jps
2841 TaskManagerRunner
2397 StandaloneSessionClusterEntrypoint
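Once start-cluster.sh returns, a small script can confirm both Flink daemons are actually up. This sketch parses the jps output shown above (inlined here so it is self-contained; on the real host substitute `jps_out="$(jps)"`):

```shell
# Sketch: check that the standalone Flink daemons appear in jps output.
# Sample output mirrors the run above; on a real host use: jps_out="$(jps)"
jps_out="2912 Jps
2841 TaskManagerRunner
2397 StandaloneSessionClusterEntrypoint"

for daemon in TaskManagerRunner StandaloneSessionClusterEntrypoint; do
  if echo "$jps_out" | grep -q "$daemon"; then
    echo "$daemon: running"
  else
    echo "$daemon: NOT running" >&2
  fi
done
```

If both daemons are present, the Flink web UI should also be reachable on the JobManager host (port 8081 by default).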