Launching /usr/local/spark-1.6.0-bin-hadoop2.6/bin/spark-shell --master yarn-client fails with the following error:

2016-02-22 11:59:19,775 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exception from container-launch with container ID: container_1456113482512_0001_01_000003 and exit code: 1
ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
at org.apache.hadoop.util.Shell.run(Shell.java:455)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
2016-02-22 11:59:19,779 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Exception from container-launch.
I started looking for the cause and initially suspected a memory shortage. The cluster runs on virtual machines, so no node has much memory: I have four VMs, each allocated 1 GB. I raised the NodeManager machine's memory to 4 GB, but the problem persisted. On closer inspection, the per-container memory limits had not been configured:

yarn.scheduler.minimum-allocation-mb
The minimum physical memory a single task can request, default 1024 (MB). If a task requests less than this, the request is rounded up to this value.

yarn.scheduler.maximum-allocation-mb
The maximum physical memory a single task can request, default 8192 (MB).

Because each node plays multiple roles, there still was not enough memory to give every container 1 GB, so I set the minimum allocation to 512 MB. The containers still failed to start.
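For reference, both limits go into yarn-site.xml on the ResourceManager. A minimal sketch with my 512 MB minimum; the 2048 MB maximum is an assumed example for small VMs, not a value from this setup:

```xml
<!-- yarn-site.xml: per-container allocation limits (maximum value is an example) -->
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>512</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>2048</value>
</property>
```

YARN reads these at startup, so the ResourceManager must be restarted for changes to take effect.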
To dig further, add a log directory in yarn-site.xml:
<property>
<name>yarn.nodemanager.log-dirs</name>
<value>/home/q/tmp/yarn/log</value>
</property>
The NodeManager logs there revealed the following error:
16/02/22 13:33:37 INFO util.Utils: Successfully started service 'sparkExecutorActorSystem' on port 50898.
Exception in thread "main" java.lang.IllegalArgumentException: System memory 257949696 must be at least 4.718592E8. Please use a larger heap size.
at org.apache.spark.memory.UnifiedMemoryManager$.getMaxMemory(UnifiedMemoryManager.scala:193)
at org.apache.spark.memory.UnifiedMemoryManager$.apply(UnifiedMemoryManager.scala:175)
at org.apache.spark.SparkEnv$.create(SparkEnv.scala:354)
at org.apache.spark.SparkEnv$.createExecutorEnv(SparkEnv.scala:217)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$$anonfun$run$1.apply$mcV$sp(CoarseGrainedExecutorBackend.scala:186)
at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:69)
at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:68)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:68)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$.run(CoarseGrainedExecutorBackend.scala:151)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$.main(CoarseGrainedExecutorBackend.scala:253)
at org.apache.spark.executor.CoarseGrainedExecutorBackend.main(CoarseGrainedExecutorBackend.scala)
It turns out that Spark 1.6's UnifiedMemoryManager reserves 300 MB of heap for the system and requires the JVM heap to be at least 1.5 times that, i.e. 450 MB (471859200 bytes, the 4.718592E8 in the error). When a container's heap falls below 450 MB, the executor refuses to start. The reserved memory can be shrunk with the Spark parameter

spark.testing.reservedMemory=104857600

(100 MB), after which the containers come up.
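One way to apply this is to pass the parameter directly on the command line when launching the shell (spark-defaults.conf works equally well); the path matches my installation above:

```
/usr/local/spark-1.6.0-bin-hadoop2.6/bin/spark-shell \
  --master yarn-client \
  --conf spark.testing.reservedMemory=104857600
```

Note that spark.testing.* parameters are intended for testing; on real hardware the cleaner fix is simply a larger executor heap (spark.executor.memory).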
To check memory usage on a node:
ps -e -o 'pid,comm,args,pcpu,rsz,vsz,stime,user,uid'|sort -nrk5
4385 java /usr/local/java/jdk1.7.0_79 1.2 456524 1637768 08:46 root 0
22359 java /usr/local/java/jdk1.7.0_79 4.5 360408 1096260 14:07 root 0
5882 java /usr/local/java/jdk1.7.0_79 0.3 284668 1633888 09:10 root 0
20388 java /usr/local/java/jdk1.7.0_79 1.6 273476 2794228 11:58 root 0
22283 java /usr/local/java/jdk1.7.0_79 1.3 259120 1314480 14:07 root 0
4129 java /usr/local/java/jdk1.7.0_79 0.3 237588 1609528 08:30 root 0
5822 java /usr/local/java/jdk1.7.0_79 0.2 171196 1603124 09:09 root 0
To summarize: on virtual machines, Hadoop needs DataNode, NodeManager, and other daemons running in the same VM, and since the VM itself has little memory, containers can fail to start. In real deployments you run on physical machines where memory is guaranteed, so this problem rarely arises; still, the memory parameters are worth tuning so that resources are used sensibly.
Part 2: Notes on embedded Jetty startup
1. First, use jetty-server-6.1.26.jar (the jetty-server jar).
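If the project is built with Maven, Jetty 6 is published on Maven Central under the org.mortbay.jetty group; the coordinates below are my assumption for pulling in this version:

```xml
<!-- Jetty 6 embedded server (org.mortbay.jetty group on Maven Central) -->
<dependency>
  <groupId>org.mortbay.jetty</groupId>
  <artifactId>jetty</artifactId>
  <version>6.1.26</version>
</dependency>
```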
2. The startup code (reconstructed below with imports; Jetty 6 lives under the org.mortbay packages):

import java.util.Collections;
import org.mortbay.jetty.Connector;
import org.mortbay.jetty.Server;
import org.mortbay.jetty.nio.SelectChannelConnector;
import org.mortbay.jetty.webapp.WebAppContext;

/**
 * Notes on embedded Jetty startup.
 */
public class TestStart {
    public static void main(String[] args) {
        Server server = new Server();                        // first, create a Server
        Connector connector = new SelectChannelConnector();  // create a connector
        connector.setPort(8081);                             // add a port, say 8081; with no IP set it defaults to localhost (127.0.0.1)
        server.setConnectors(new Connector[]{connector});    // register the connector with the server

        WebAppContext context = new WebAppContext();         // create a webapp container
        context.setContextPath("/");                         // set the context path; in a browser, http://localhost:8081/ then reaches the app
        context.setDescriptor("E:/program/Test/test/WEB-INF/web.xml"); // the web.xml (or other descriptor) for your app
        // context.setWar("....");                           // alternatively, the path to a WAR file
        context.setResourceBase("E:/program/Test/test");     // project path, one level above WEB-INF
        context.setParentLoaderPriority(true);               // prefer classes from the parent class loader
        context.setInitParams(Collections.singletonMap(
                "org.mortbay.jetty.servlet.Default.useFileMappedBuffer", "false"));
        server.addHandler(context);

        /*
        // optional JMX support:
        MBeanServer mBeanServer = ManagementFactory.getPlatformMBeanServer();
        MBeanContainer mBeanContainer = new MBeanContainer(mBeanServer);
        server.getContainer().addEventListener(mBeanContainer);
        mBeanContainer.start();
        */

        try {
            server.start();
            server.join();
        } catch (Exception e) {
            e.printStackTrace();
            System.exit(100);
        }
    }
}