At this point the basic experimental environment, i.e. the standalone setup, is complete; next we implement the pseudo-distributed setup.
1. Set up passwordless ssh to the local machine, since the pseudo-distributed mode still runs entirely on this one node.
[hadoop@server1 hadoop]$ ssh-keygen
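Generating the key pair by itself does not give passwordless login; the public key still has to be authorized on this same node. A minimal sketch, assuming the default key location and an empty passphrase:

# Authorize the new public key for login to this very node.
ssh-copy-id localhost          # or: cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
# Verify: this must return without asking for a password.
ssh localhost hostname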
2. At this point the workers file can contain either localhost or the IP address; for convenience in the later experiments, the IP address is used here.
3. Set the slave node to be the local machine.
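For example, the workers file can be written as follows; 172.25.0.1 is only a placeholder for the real address of server1:

# List this node as the only worker, i.e. the only datanode.
echo "172.25.0.1" > /home/hadoop/hadoop/etc/hadoop/workers
cat /home/hadoop/hadoop/etc/hadoop/workers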
[hadoop@server1 hadoop]$ vim hdfs-site.xml

4. Set the number of replicas to 1, since only this one node runs a datanode process.
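A minimal sketch of what hdfs-site.xml ends up containing for this step, written with a heredoc instead of vim (the result is the same):

# Keep a single copy of every block, since there is only one datanode.
cat > /home/hadoop/hadoop/etc/hadoop/hdfs-site.xml <<'EOF'
<?xml version="1.0"?>
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>
EOF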
5. Set the master node to the local machine as well.

[hadoop@server1 hadoop]$ vim core-site.xml
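A minimal sketch of core-site.xml; 172.25.0.1 is again a placeholder for server1, and port 9000 is the usual single-node choice rather than something fixed by this guide:

# Point the default filesystem at the namenode on this node.
cat > /home/hadoop/hadoop/etc/hadoop/core-site.xml <<'EOF'
<?xml version="1.0"?>
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://172.25.0.1:9000</value>
    </property>
</configuration>
EOF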
6. Initialize the filesystem by formatting the NameNode.

[hadoop@server1 hadoop]$ pwd
/home/hadoop/hadoop/etc/hadoop
[hadoop@server1 hadoop]$ cd ..
[hadoop@server1 etc]$ cd ..
[hadoop@server1 hadoop]$ ls
bin  include  lib      LICENSE.txt  output      sbin
etc  input    libexec  NOTICE.txt   README.txt  share
[hadoop@server1 hadoop]$ bin/hdfs namenode -format

7. Notice that after the initialization, some temporary directories, as well as the daemons' pid files, are generated under the /tmp directory.
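A quick way to check this, assuming the default hadoop.tmp.dir (/tmp/hadoop-<user>) and the default pid directory (/tmp), neither of which was changed above:

# Metadata written by the format, under the default hadoop.tmp.dir:
ls /tmp/hadoop-hadoop/dfs/name/current
# The daemon pid files show up here once the daemons have been started:
ls /tmp/hadoop-hadoop-*.pid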
8. Start the services.

[hadoop@server1 hadoop]$ sbin/start-dfs.sh

9. The datanode and namenode processes are now both running on this node.
[hadoop@server1 hadoop]$ jps

10. Check which service ports have been opened.
[hadoop@server1 hadoop]$ netstat -antlupe

11. After setting up name resolution on the physical host, test in a browser: the graphical web interface can now be reached.
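A sketch of that resolution step on the physical host; 172.25.0.1 is a placeholder, and 9870 is the default NameNode web UI port in Hadoop 3.x:

# On the physical host, as root: make the hostname server1 resolvable.
echo "172.25.0.1 server1" >> /etc/hosts
# The NameNode web interface should now answer on port 9870.
curl -sI http://server1:9870/ | head -n 1    # or open the same URL in a browser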
12. Check information about the hosts, for example which ones are online and which are not.

[hadoop@server1 hadoop]$ bin/hdfs dfsadmin -report

13. Create the data directory and upload data; with no target path given, the hdfs dfs commands work relative to /user/hadoop.

[hadoop@server1 hadoop]$ bin/hdfs dfs -mkdir /user/hadoop
[hadoop@server1 hadoop]$ bin/hdfs dfs -ls
[hadoop@server1 hadoop]$ bin/hdfs dfs -put input
[hadoop@server1 hadoop]$ bin/hdfs dfs -ls
[hadoop@server1 hadoop]$ bin/hdfs dfs -ls input/

14. The files just uploaded can be seen in the browser.
15. The graphical interface does not have permission to delete files directly.
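Deleting therefore has to be done from the command line; a brief sketch, where /user/hadoop/some_dir is a hypothetical path used only for illustration:

# Remove a file or directory stored in HDFS.
bin/hdfs dfs -rm -r /user/hadoop/some_dir
# Confirm it is gone.
bin/hdfs dfs -ls /user/hadoop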
The local input/ and output/ directories are removed first, to show that the example job now reads its input from, and writes its output back to, HDFS rather than the local filesystem.

[hadoop@server1 hadoop]$ rm -fr input/ output/
[hadoop@server1 hadoop]$ ls
bin  include  libexec      logs        README.txt  share
etc  lib      LICENSE.txt  NOTICE.txt  sbin
[hadoop@server1 hadoop]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.2.1.jar grep input output 'dfs[a-z.]+'
[hadoop@server1 hadoop]$ ls
bin  include  libexec      logs        README.txt  share
etc  lib      LICENSE.txt  NOTICE.txt  sbin
[hadoop@server1 hadoop]$ bin/hdfs dfs -ls
Found 2 items
drwxr-xr-x   - hadoop supergroup          0 2019-10-30 13:55 input
drwxr-xr-x   - hadoop supergroup          0 2019-10-30 14:11 output
[hadoop@server1 hadoop]$ bin/hdfs dfs -cat output/*
1       dfsadmin
[hadoop@server1 hadoop]$ bin/hdfs dfs -get output
[hadoop@server1 hadoop]$ ls
bin  include  libexec      logs        output      sbin
etc  lib      LICENSE.txt  NOTICE.txt  README.txt  share
[hadoop@server1 hadoop]$ cd output/
[hadoop@server1 output]$ ls
part-r-00000  _SUCCESS
[hadoop@server1 output]$ cat *
1       dfsadmin
[hadoop@server1 output]$ cd ..
[hadoop@server1 hadoop]$ rm -fr output/
[hadoop@server1 hadoop]$ ls
bin  include  libexec      logs        README.txt  share
etc  lib      LICENSE.txt  NOTICE.txt  sbin
[hadoop@server1 hadoop]$

16. Check the result in the browser.