
Hadoop Installation and Deployment

Published: 2012-11-03 10:57:43    Author: rapoo


OS: Red Hat Enterprise Linux 5

JDK: JDK 1.6

Hadoop: Hadoop 0.19.2

Number of nodes: two (more can be added as needed)

Switch to the root user and edit the /etc/hosts file so that it contains the following:

# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1       localhost.localdomain   localhost
::1             localhost6.localdomain6 localhost6
192.168.0.121   hwellzen-bj-1.compute   hwellzen-bj-1
192.168.0.122   hwellzen-bj-2.compute   hwellzen-bj-2


Note: in the content above a new line starts only where there is an actual carriage return; do not introduce extra line breaks. Set /etc/hosts on both machines to this content, then reboot each system.
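A quick way to confirm that the new names resolve is to ping each node by its short hostname from the other:

[liuzj@hwellzen-bj-1 ~]$ ping -c 2 hwellzen-bj-2
[liuzj@hwellzen-bj-2 ~]$ ping -c 2 hwellzen-bj-1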

Next, set up passwordless SSH for the user liuzj. On hwellzen-bj-1, run:

[liuzj@hwellzen-bj-1 ~]$ ssh-keygen -t rsa

This command generates an RSA key pair for the user liuzj on hwellzen-bj-1. Press Enter when asked for the save path to accept the default, and press Enter again when prompted for a passphrase, which leaves the passphrase empty. The resulting key pair, id_rsa and id_rsa.pub, is stored under /home/liuzj/.ssh by default.

Then, inside the /home/liuzj/.ssh directory, run:

cp id_rsa.pub authorized_keys

This lets the user log in to the local machine over ssh without a password:

[liuzj@hwellzen-bj-1 ~]$ ssh hwellzen-bj-1

Last login: Tue Aug 11 09:31:49 2009 from 192.168.0.129

[liuzj@hwellzen-bj-1 ~]$
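If ssh still prompts for a password at this point, the usual cause is file permissions: with the default StrictModes setting, sshd ignores keys whose files are group- or world-writable. Tightening them does no harm:

chmod 700 /home/liuzj/.ssh
chmod 600 /home/liuzj/.ssh/authorized_keys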

Passwordless login from hwellzen-bj-1 to hwellzen-bj-2 is set up the same way: copy hwellzen-bj-1's public key into authorized_keys on hwellzen-bj-2 (the key pair generated above can be reused, so the first command below is only needed if ssh-keygen has not been run yet):

[liuzj@hwellzen-bj-1 ~]$ ssh-keygen -t rsa

[liuzj@hwellzen-bj-1 ~]$ scp .ssh/id_rsa.pub hwellzen-bj-2:/home/liuzj

[liuzj@hwellzen-bj-2 ~]$ cat id_rsa.pub >> .ssh/authorized_keys

On my machines this looks like:

[liuzj@hwellzen-bj-1 ~]$ ssh hwellzen-bj-2

Last login: Tue Aug 11 09:31:49 2009 from 192.168.0.129

[liuzj@hwellzen-bj-2 ~]$

Next, install the JDK:

chmod u+x jdk-6u2-linux-i586.bin

./jdk-6u2-linux-i586.bin

The JDK files are now installed.
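The profile settings below assume the JDK ended up under /usr/local/java/jdk1.6.0_02. The .bin installer simply unpacks into the current directory, so if it was run elsewhere, the extracted jdk1.6.0_02 directory can be moved into place first (as root; the target path is just the one used in the profile lines that follow):

mkdir -p /usr/local/java
mv jdk1.6.0_02 /usr/local/java/

With the JDK in place, append the following lines to /etc/profile and reload it: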

PATH=/usr/local/java/jdk1.6.0_02/bin:$PATH

JAVA_HOME=/usr/local/java/jdk1.6.0_02

export PATH JAVA_HOME

source /etc/profile

Check the installation by running java -version:

[liuzj@hwellzen-bj-2 ~]$ java -version

java version "1.6.0_02"

Java(TM) SE Runtime Environment (build 1.6.0_02-b05)

Java HotSpot(TM) Client VM (build 1.6.0_02-b05, mixed mode, sharing)

[liuzj@hwellzen-bj-2 ~]$


Extract hadoop-0.19.2.tar.gz under the home directory of the user liuzj:

[liuzj@hwellzen-bj-2 ~]$ tar zxvf hadoop-0.19.2.tar.gz

The resulting directory is:

[liuzj@hwellzen-bj-2 hadoop-0.19.2]$ pwd

/home/liuzj/hadoop-0.19.2

Note: make sure the installation directory structure is identical on every machine.

Files that need to be configured (all under the conf/ directory): hadoop-env.sh, slaves, masters, hadoop-site.xml

conf/hadoop-env.sh (point JAVA_HOME at the JDK installed above):

# The java implementation to use.  Required.
export JAVA_HOME=/usr/local/java/jdk1.6.0_02

conf/slaves (with dfs.replication set to 2 below, both machines act as slaves):

hwellzen-bj-2
hwellzen-bj-1

conf/masters:

hwellzen-bj-1

conf/hadoop-site.xml:

<configuration>

<property>
  <name>fs.default.name</name>
  <value>hdfs://hwellzen-bj-1:4310/</value>
</property>

<property>
  <name>mapred.job.tracker</name>
  <value>hwellzen-bj-1:4311</value>
</property>

<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>

<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/liuzj/hadoop-0.19.2/tmp</value>
</property>

<property>
  <name>dfs.name.dir</name>
  <value>/home/liuzj/hadoop-0.19.2/filesystem/name/</value>
</property>

<property>
  <name>dfs.data.dir</name>
  <value>/home/liuzj/hadoop-0.19.2/filesystem/data</value>
  <description>Determines where on the local filesystem a DFS data node should store its blocks. If this is a comma-delimited list of directories, then data will be stored in all named directories, typically on different devices. Directories that do not exist are ignored.</description>
</property>

<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx512m</value>
</property>

</configuration>
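Since the note above requires an identical directory layout on every machine, the simplest approach is to finish editing the configuration on one node and then copy the whole hadoop-0.19.2 directory to the other. A sketch, assuming the edited copy lives on hwellzen-bj-1:

[liuzj@hwellzen-bj-1 ~]$ scp -r /home/liuzj/hadoop-0.19.2 hwellzen-bj-2:/home/liuzj/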

Format the distributed file system:

[liuzj@hwellzen-bj-1 hadoop-0.19.2]$ bin/hadoop namenode -format
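After formatting, the directory configured in dfs.name.dir should exist and contain the new file system image:

[liuzj@hwellzen-bj-1 hadoop-0.19.2]$ ls filesystem/name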


Start Hadoop:

[liuzj@hwellzen-bj-1 hadoop-0.19.2]$ bin/start-all.sh
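To confirm that the daemons actually came up, jps (part of the JDK) can be run on each node; with the configuration above, hwellzen-bj-1 should show NameNode, SecondaryNameNode, JobTracker, DataNode and TaskTracker, and hwellzen-bj-2 should show DataNode and TaskTracker:

[liuzj@hwellzen-bj-1 hadoop-0.19.2]$ jps
[liuzj@hwellzen-bj-2 ~]$ jps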

Then open the web interfaces in a browser:

http://192.168.0.121:50070/dfshealth.jsp

http://192.168.0.121:50030/jobtracker.jsp
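The HDFS state can also be checked from the command line, and a small MapReduce job makes a convenient smoke test. A sketch (the examples jar name follows the hadoop-<version>-examples.jar pattern of this release; with dfs.replication set to 2 and both nodes in slaves, the report should show two live datanodes):

[liuzj@hwellzen-bj-1 hadoop-0.19.2]$ bin/hadoop dfsadmin -report
[liuzj@hwellzen-bj-1 hadoop-0.19.2]$ bin/hadoop fs -put conf input
[liuzj@hwellzen-bj-1 hadoop-0.19.2]$ bin/hadoop jar hadoop-0.19.2-examples.jar wordcount input output
[liuzj@hwellzen-bj-1 hadoop-0.19.2]$ bin/hadoop fs -ls output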


Note: make sure all the ports Hadoop uses are reachable. In my case the firewall blocked some of them and Hadoop failed to start; once the firewall was turned off, startup succeeded.
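On RHEL 5 the firewall in question is iptables. If simply switching it off is acceptable, as was done here, the following commands (run as root) stop it and keep it from starting at boot; opening only the specific Hadoop ports would be the stricter alternative:

service iptables stop
chkconfig iptables off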
