
Nutch Framework Video Tutorial 13

Published: 2013-05-02 09:39:29  Author: rapoo

Lecture 13

Youku online video address: (29 minutes)
Compressed ultra-HD download address:

1. Changing the load distribution

Three machines, with the daemon roles redistributed:

host2(NameNode、DataNode、TaskTracker)

host6(SecondaryNameNode、DataNode、TaskTracker)

host8(JobTracker、DataNode、TaskTracker)

Designate host6 as the SecondaryNameNode:

vi conf/masters   (set the content to host6)

scp conf/masters host6:/home/hadoop/hadoop-1.1.2/conf/masters
scp conf/masters host8:/home/hadoop/hadoop-1.1.2/conf/masters

vi conf/hdfs-site.xml

<property>
  <name>dfs.http.address</name>
  <value>host2:50070</value>
</property>
<property>
  <name>dfs.secondary.http.address</name>
  <value>host6:50090</value>
</property>

scp conf/hdfs-site.xml host6:/home/hadoop/hadoop-1.1.2/conf/hdfs-site.xml
scp conf/hdfs-site.xml host8:/home/hadoop/hadoop-1.1.2/conf/hdfs-site.xml
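After editing, the new value can be sanity-checked without reopening the editor. A quick sed sketch (the XML fragment is inlined here for illustration; on the cluster you would run the sed against conf/hdfs-site.xml instead):

```shell
# Pull the <value> out of a Hadoop-style <property> element.
# The fragment is inlined for illustration only.
fragment='<property><name>dfs.secondary.http.address</name><value>host6:50090</value></property>'
printf '%s\n' "$fragment" | sed -n 's:.*<value>\(.*\)</value>.*:\1:p'
```

This prints host6:50090, confirming the edit took effect.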

Designate host8 as the JobTracker:

vi conf/mapred-site.xml

<property>
  <name>mapred.job.tracker</name>
  <value>host8:9001</value>
</property>

scp conf/mapred-site.xml host6:/home/hadoop/hadoop-1.1.2/conf/mapred-site.xml
scp conf/mapred-site.xml host8:/home/hadoop/hadoop-1.1.2/conf/mapred-site.xml

vi conf/core-site.xml

<property>
  <name>fs.checkpoint.dir</name>
  <value>/home/hadoop/dfs/filesystem/namesecondary</value>
</property>

scp conf/core-site.xml host6:/home/hadoop/hadoop-1.1.2/conf/core-site.xml
scp conf/core-site.xml host8:/home/hadoop/hadoop-1.1.2/conf/core-site.xml

Configure host8

The start-mapred.sh script on host8 starts the TaskTrackers on host2 and host6, so run the following on host8:

ssh-keygen -t rsa   (empty passphrase, default key path)

ssh-copy-id -i .ssh/id_rsa.pub hadoop@host2
ssh-copy-id -i .ssh/id_rsa.pub hadoop@host6
ssh-copy-id -i .ssh/id_rsa.pub hadoop@host8

From host8 you can now log in to host2 and host6 over SSH without a password:

ssh host2

ssh host6

ssh host8
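What ssh-copy-id does on the remote side is essentially to append the local public key to the remote user's ~/.ssh/authorized_keys. Its effect can be sketched locally; a temp directory stands in for /home/hadoop/.ssh on the remote host, and the key string below is a placeholder, not a real key:

```shell
# Simulate the remote-side effect of ssh-copy-id: append the public key
# to authorized_keys. The key is a placeholder for illustration.
tmp=$(mktemp -d)
echo 'ssh-rsa AAAAB3Nza... hadoop@host8' > "$tmp/id_rsa.pub"
cat "$tmp/id_rsa.pub" >> "$tmp/authorized_keys"
grep -c 'hadoop@host8' "$tmp/authorized_keys"
rm -rf "$tmp"
```

The grep prints 1, showing exactly one copy of the key was appended; running ssh-copy-id twice would append a duplicate, which is harmless but untidy.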

Append to /home/hadoop/.bashrc:

export PATH=/home/hadoop/hadoop-1.1.2/bin:$PATH

host2: run start-dfs.sh

host8: run start-mapred.sh

2. SecondaryNameNode

ssh host6

Stop the secondarynamenode:

hadoop-1.1.2/bin/hadoop-daemon.sh stop secondarynamenode

Force a merge of fsimage and edits:

hadoop-1.1.2/bin/hadoop secondarynamenode -checkpoint force

Start the secondarynamenode:

hadoop-1.1.2/bin/hadoop-daemon.sh start secondarynamenode

3. Enabling the trash

<property>
  <name>fs.trash.interval</name>
  <value>10080</value>
</property>
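fs.trash.interval is given in minutes: with it set, files deleted via hadoop fs -rm are moved under the user's .Trash directory and only purged after the interval elapses. The value 10080 chosen above works out to 7 days:

```shell
# Convert the fs.trash.interval value (minutes) to days: 10080 / 60 / 24 = 7.
minutes=10080
echo $(( minutes / 60 / 24 ))
```

So an accidentally deleted file can be recovered from .Trash for up to a week before HDFS reclaims the space.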
