
Maven fails with a sparse 'Process exited with code 137' message in the logs.
What are the possible causes of this critical error, and what could fix it?

Mike
  • I don't think that's a valid reason for creating a new question. The community will decide whether your answer is worth votes on the other question. – Adam Burley Feb 23 '18 at 18:47
  • Are you sure that question has any relation to this one, apart from its title, which somebody recently changed? – Mike Feb 23 '18 at 21:07
  • Another issue with this question is that it does not clearly identify what command is being run with maven or provide any other details. – kldavis4 Feb 23 '18 at 21:13
  • You do not need any other details; those will differ across environments, machines, and situations. Should you care about what version of TC I'm running my builds on?.. What is important is a clear snippet of text that you copy from your console and put into Google to find a solution for. – Mike Feb 23 '18 at 21:17
  • most people will find a solution to their problem in less than 20 seconds – Mike Feb 23 '18 at 21:18
  • That other question has no edits, so I'm not sure what you are referring to. – Adam Burley Feb 24 '18 at 13:39

2 Answers


The process was killed by the Linux OOM killer because the machine is low on memory.
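A quick sanity check on the exit code itself: shells report a child terminated by a signal as 128 plus the signal number, and the OOM killer sends SIGKILL (signal 9), which is exactly where 137 comes from. A minimal illustration:

```java
// Exit status convention: 128 + signal number for a signal-killed process.
// SIGKILL is signal 9, so an OOM-killed build exits with 128 + 9 = 137.
public class ExitCode {
    public static void main(String[] args) {
        int sigkill = 9;
        System.out.println("expected exit status: " + (128 + sigkill)); // 137
    }
}
```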

Give the machine more memory and/or swap, or reduce the memory footprint of your process. The footprint is directly affected by the JVM's default Xmx, which is most probably far from what the JVM actually needs.

Pass additional Java command-line options:

-Xmx256m -XX:MaxPermSize=512m

or configure the environment variable:

MAVEN_OPTS=-Xmx256m -XX:MaxPermSize=512m

Note that MaxPermSize has no effect on Java 8+.
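To confirm that the JVM running your build actually picked up these options, you can print the limits it is running with. This small class is my own illustration, not part of the answer above:

```java
// Prints the heap limits of the JVM this class runs in; useful for
// verifying that -Xmx / MAVEN_OPTS settings were actually applied.
public class HeapCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.out.println("max heap (-Xmx): " + rt.maxMemory() / (1024 * 1024) + " MiB");
        System.out.println("current heap:    " + rt.totalMemory() / (1024 * 1024) + " MiB");
    }
}
```

Run it with the same options you give Maven, e.g. `java -Xmx256m HeapCheck`, and the first line should report roughly 256 MiB.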

Mike
  • Really you should have posted this as an answer on the duplicate question rather than creating a new question. Someone commented on the other answer already that: "I don't think that the Xmx option helps, since then the JVM would notice the lack of memory and exit with a different error. When the kernel kills a process there must be a real lack of memory." What this means is, if there is enough "real memory" available to assign more memory to the JVM, then the JVM process would not have crashed at the OS-level, it would just have thrown an `OutOfMemoryError` or similar. – Adam Burley Feb 23 '18 at 18:43
  • original question was asked 6 years ago. and a year ago it looked like somebody cut half of build log and started a SO thread with question like 'I have some build problem here, could you please help'. I spent half an hour to understand if original question has any relation to my problem. Then I didn't find any answers that may be anyhow related to my problem. So I found solution to my problem in other place and put concrete problem description and simplest clean working solution to the problem. – Mike Feb 23 '18 at 21:02
  • What is the reason for the down-vote? Is the answer wrong, or not working? – Mike Feb 23 '18 at 21:04
  • Linux guys [say](https://serverfault.com/questions/571319/how-is-kernel-oom-score-calculated) that 'total memory used' is not what the OOM Killer uses, but oom_score instead. You can spend another week of your life investigating how the OOM Killer is [actually working](http://www.oracle.com/technetwork/articles/servers-storage-dev/oom-killer-1911807.html), but this exact answer has fixed my problem, and helped at least 3 other people (I don't understand the up-votes otherwise). – Mike Feb 23 '18 at 23:16
  • So my question stands: what is the reason for the down-vote? Is the answer wrong, or not working? – Mike Feb 23 '18 at 23:18
  • The answer is wrong, and also should not have been posted here, should have been posted as an answer on the other question which is clearly the same. And if it has upvotes, possibly those people have not seen the other question or tried the solution. Not sure I understand your reference to OOM Killer, as the process was killed by the kernel, not Java. Also, by default Java assigns 25% of system memory to the JVM max heap, so your system has less than 1GB of total memory which is a problem in itself. If you want further feedback, you should post this answer on the original question. – Adam Burley Feb 24 '18 at 13:44
  • Java killer, what? When you said 25%, were you talking about the client or the server JVM? Are you sure it is that simple? Could it be that `-Xmx256m` actually reduces the OS memory footprint? – Mike Feb 24 '18 at 22:51
  • Check this answer, which quotes from the official Oracle spec: https://stackoverflow.com/a/4667635/191761 – Adam Burley Feb 26 '18 at 15:18
  • Already done, along with the official documentation. This is what I was talking about: it is not as simple as 25%. – Mike Feb 26 '18 at 15:53

Just to note, I eventually found that the problem was too much memory being allocated, not too little.

When running the job on a machine with 2GB of memory and setting the max heap size to 2GB, the build eventually failed with status 137. However, when limiting the job to a 1GB maximum (e.g. `-Xmx1g -Xms512m`), the build succeeded.

This kind of makes sense, because the JVM will freely increase its memory up to the maximum heap size, but if there's not enough real memory, the OS will kill the process. However, if you reduce the max heap size, the JVM won't try to increase its memory so highly, so the OS won't worry about it enough to kill it.
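The arithmetic behind this can be sketched with the numbers from the answer above. The overhead figures below are rough assumptions of mine (metaspace, thread stacks, code cache, plus the OS and other processes), not measurements from the build in question:

```java
// Back-of-the-envelope check: a 2 GiB max heap cannot fit on a 2 GiB
// machine once non-heap JVM memory and the OS itself are counted, so a
// fully grown heap invites the OOM killer; a 1 GiB cap leaves headroom.
public class Headroom {
    public static void main(String[] args) {
        long physicalMiB    = 2048; // machine RAM from the answer
        long jvmOverheadMiB = 300;  // metaspace, stacks, code cache (assumed)
        long osAndOtherMiB  = 400;  // kernel + other processes (assumed)

        long withXmx2g = 2048 + jvmOverheadMiB + osAndOtherMiB; // heap fully grown
        long withXmx1g = 1024 + jvmOverheadMiB + osAndOtherMiB;

        System.out.println("-Xmx2g worst case: " + withXmx2g + " MiB needed, "
                + physicalMiB + " MiB available -> OOM killer territory");
        System.out.println("-Xmx1g worst case: " + withXmx1g + " MiB needed, "
                + physicalMiB + " MiB available -> fits");
    }
}
```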

Furthermore, I was using GWT, which forks a separate process for compilation, and the arguments have to be specified as an `extraJvmArgs` element within the `gwt-maven-plugin` configuration, not in `MAVEN_OPTS`.
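For reference, the plugin configuration described above looks roughly like this in the `pom.xml` (the version number is illustrative, not taken from the answer):

```xml
<!-- Sketch of the gwt-maven-plugin configuration described above. -->
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>gwt-maven-plugin</artifactId>
  <version>2.8.2</version>
  <configuration>
    <!-- Passed to the forked GWT compiler JVM; MAVEN_OPTS does not reach it. -->
    <extraJvmArgs>-Xmx1g -Xms512m</extraJvmArgs>
  </configuration>
</plugin>
```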

Adam Burley
  • You were talking about too little memory, not too much memory. And yes, I posted this answer in both this thread and the other thread so it is easy to find, because people might find either thread through Google searches. But I still think this thread should not exist, because it's a duplicate of the other one. – Adam Burley Feb 28 '18 at 11:24
  • My answer says nothing about either too little or too much memory. My answer is about a solution to overcome the problem. Of course there are deeper reasons why that solution works, which my answer gives no details on. If you are so 'correct' about how and where to post an answer, you should have posted your answer in one place and put a link in the other. Secondly, you are actually proposing the same solution, so this should not be a new answer but a comment. Thirdly, my solution is actually working, right? – Mike Feb 28 '18 at 11:40
  • Firstly, yes your answer did share the details. You have just edited your answer but before it said "Most probably maven has insufficient memory settings". Secondly, in light of that, your solution was to increase the maximum heap, which is different from my answer which is to decrease it. Thirdly, your solution may work in some cases where the maximum heap is by default being set higher than your value, but that doesn't make it correct. Fourthly, it's bad practice on SO to just post a link to somewhere as an answer and not elaborate what it says. – Adam Burley Feb 28 '18 at 12:37
  • I intentionally didn't edit my answer for some time, until the discussion ended, despite knowing many details, so that you couldn't say the answer was edited. But the history is available, and I can't find anything there about 'increase the maximum heap'. Moreover, 'maven has insufficient memory settings' can be read as 'insufficient memory' or 'insufficient settings'. So at worst, you can blame me for unclear wording, which has nothing to do with the actual fix. – Mike Feb 28 '18 at 13:53
  • Thanks so much, your solution finally pointed me in the right direction! I had the problem after switching from Java 7 to 8. Slightly reducing Xmx helped so that the build doesn't crash anymore. – winne2 Mar 20 '19 at 17:09