读书人

Hadoop memory error

Published: 2012-06-30 17:20:12  Author: rapoo


11/09/06 09:20:25 WARN mapred.JobClient: Error reading task outputhttp://server4:50060/tasklog?plaintext=true&taskid=attempt_201109060853_0005_r_000008_0&filter=stdout
11/09/06 09:20:25 WARN mapred.JobClient: Error reading task outputhttp://server4:50060/tasklog?plaintext=true&taskid=attempt_201109060853_0005_r_000008_0&filter=stderr
11/09/06 09:20:34 INFO mapred.JobClient: Task Id : attempt_201109060853_0005_m_000009_1, Status : FAILED
java.io.IOException: Task process exit with nonzero status of 1.
        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:418)


I hit this error while running a MapReduce job. Some articles I found say the fix is to delete all the files under userlogs.
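A minimal sketch of that cleanup. On a real cluster the log directory is typically something like `$HADOOP_HOME/logs/userlogs`, but the exact path depends on the installation (an assumption here), so the sketch demonstrates the logic on a throwaway temporary directory:

```python
import shutil
import tempfile
from pathlib import Path

def clean_userlogs(userlogs: Path) -> int:
    """Remove every per-task subdirectory under userlogs; return how many were removed."""
    removed = 0
    for entry in userlogs.iterdir():
        if entry.is_dir():
            shutil.rmtree(entry)
            removed += 1
    return removed

# Throwaway directory standing in for hadoop/userlogs/ (path is illustrative).
userlogs = Path(tempfile.mkdtemp())
for i in range(5):
    (userlogs / f"attempt_201109060853_0005_m_{i:06d}_0").mkdir()

removed = clean_userlogs(userlogs)
print(removed)                             # 5 task directories removed
print(sum(1 for _ in userlogs.iterdir()))  # 0 entries left
```

On a live cluster you would of course stop the TaskTracker first and point the function at the real userlogs path instead of a temp directory.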

A post in English on the same problem:

Just an FYI, found the solution to this problem.

Apparently, it's an OS limit on the number of sub-directories that can be created in another directory. In this case, we had 31998 sub-directories under hadoop/userlogs/, so any new tasks would fail in Job Setup.
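The 31998 figure is no accident: ext3 caps a directory's link count at 32000, and with `.` plus the directory's own entry in its parent taking two links, each subdirectory's `..` entry uses one of the remaining 31998. A small sketch of a capacity check (the userlogs path and the ext3 assumption are both assumptions; demonstrated on a temp directory):

```python
import tempfile
from pathlib import Path

# ext3 caps a directory's link count at 32000; two links are used by "."
# and the directory's entry in its parent, so at most 32000 - 2 = 31998
# subdirectories fit -- exactly the count the post ran into.
EXT3_MAX_SUBDIRS = 32000 - 2

def subdirs_remaining(logdir: Path) -> int:
    """How many more task directories fit before mkdir starts failing on ext3."""
    used = sum(1 for e in logdir.iterdir() if e.is_dir())
    return EXT3_MAX_SUBDIRS - used

# Demo on a temporary directory standing in for hadoop/userlogs/.
demo = Path(tempfile.mkdtemp())
for i in range(3):
    (demo / f"task_{i}").mkdir()

print(EXT3_MAX_SUBDIRS)         # 31998
print(subdirs_remaining(demo))  # 31995
```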

From the unix command line, mkdir fails as well.

-Xss 20000k

This parameter means that every additional thread costs the JVM 20 MB of memory. A better value is around 128 KB; the default seems to be 512 KB.
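A quick back-of-the-envelope calculation shows why 20000k is dangerous (the thread counts below are made-up illustrations, not measurements from the cluster):

```python
def stack_memory_mb(xss_kb: int, threads: int) -> float:
    """Total memory reserved for thread stacks, in MB: per-thread -Xss times thread count."""
    return xss_kb * threads / 1024

# With -Xss 20000k every thread reserves ~20 MB of stack:
print(stack_memory_mb(20000, 100))  # 1953.125 MB for just 100 threads
# With the suggested 128 KB instead:
print(stack_memory_mb(128, 100))    # 12.5 MB
```

At that rate a task JVM spawning a few hundred threads exhausts its address space on stacks alone, which is consistent with the child JVMs dying with a nonzero exit status.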

-Xmx: the JVM's maximum heap size (Java heap max). The default is 1/4 of physical memory; the best value depends on how much physical memory the machine has and what else is consuming memory on it.

-Xms: the initial Java heap size. On a server JVM it is best to set -Xms equal to -Xmx; on development/test machines the default can be kept.

-Xmn: the size of the young generation of the Java heap. If you are not familiar with it, keep the default.

-Xss: the stack size of each thread. If you are not familiar with it, keep the default.
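Putting the flags together: on Hadoop clusters of this era the per-task JVM options are usually set via the `mapred.child.java.opts` property in mapred-site.xml. A small sketch that assembles an option string following the advice above (the heap size chosen is illustrative, not from the post):

```python
def child_java_opts(heap_mb: int, stack_kb: int = 128) -> str:
    """Build a JVM option string per the advice above:
    -Xms equal to -Xmx for server JVMs, and a modest -Xss."""
    return f"-Xms{heap_mb}m -Xmx{heap_mb}m -Xss{stack_kb}k"

print(child_java_opts(512))  # -Xms512m -Xmx512m -Xss128k
```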

