
This is related to my previous question.

I set -Xms to 512M and -Xmx to 6G for one Java process. I have three such processes.

My total RAM is 32 GB, of which 2 GB is always occupied.

I ran the `free` command to confirm that at least 27 GB was free. My jobs require at most 18 GB in total at any time.

It was running fine. Each job reserved around 4 to 5 GB and actually used around 3 to 4 GB. I understand that -Xmx doesn't mean the process will always occupy 6 GB.

When another process was started on the same server by another user, it occupied 14 GB, and then one of my processes failed.

I understand that I need to add RAM or schedule the colliding jobs.

My question is: how can I force my job to always use 6 GB, and why does it throw a GC limit reached error in this case?

I monitored them with VisualVM and jstat.

Any advice is welcome.

asked by Gibbs
    If you want your heap size to be fixed at 6gb, then use `-Xms6g -Xmx6g` – knittl Aug 06 '20 at 18:02
  • Thanks @knittl. Do you have any idea about the GC limit error in the situation I explained? – Gibbs Aug 06 '20 at 18:52
  • Consider using a real scheduling and management tool to manage and limit memory reservations for each process. Kubernetes may apply here. – nanofarad Aug 06 '20 at 20:11

1 Answer


Simple answer: -Xmx is not a hard limit on the JVM's memory. It only limits the heap available to Java code inside the JVM. Lower your -Xmx and you may stabilize total process memory at a size that suits you.

Long answer: the JVM is a complex machine. Think of it as an OS for your Java code. The virtual machine needs extra memory for its own housekeeping (e.g. GC metadata), memory for thread stacks, "off-heap" memory (e.g. memory allocated by native code through JNI; buffers), etc.

-Xmx only limits the heap size for objects: the memory that's dealt with directly in your Java code. Everything else is not accounted for by this setting.
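To make that concrete, here's a minimal sketch (class name and the 64 MB figure are illustrative): `Runtime.maxMemory()` reflects the -Xmx cap, while a direct `ByteBuffer` is allocated in native memory and is not counted against the heap at all.

```java
import java.nio.ByteBuffer;

// Minimal sketch: -Xmx caps the heap (reported by Runtime.maxMemory()),
// but direct buffers live outside that heap.
public class OffHeapDemo {
    public static void main(String[] args) {
        long heapCap = Runtime.getRuntime().maxMemory(); // roughly the -Xmx value

        // 64 MB allocated in native memory, NOT on the Java heap:
        ByteBuffer direct = ByteBuffer.allocateDirect(64 * 1024 * 1024);

        System.out.println("heap cap (MB):      " + (heapCap >> 20));
        System.out.println("direct buffer (MB): " + (direct.capacity() >> 20));
    }
}
```

Run this with a small -Xmx and watch the process's resident memory in `top`: the direct buffer shows up there, but not in the heap figures VisualVM charts.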

There's a newer JVM setting -XX:MaxRAM (1, 2) that aims to keep the entire process memory within that limit.
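A quick way to see what MaxRAM actually influences is to dump the resulting flags; note that it feeds the default heap-sizing heuristics rather than hard-capping process memory (the 4g value below is arbitrary):

```shell
# MaxRAM feeds the JVM's default heap-sizing heuristics; it is not a
# hard cap on total process memory. The 4g value here is arbitrary.
java -XX:MaxRAM=4g -XX:+PrintFlagsFinal -version 2>/dev/null | grep -w MaxHeapSize
```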

From your other question:

It is multi-threaded: 100 reader and 100 writer threads. Each one has its own connection to the database.

Keep in mind that the OS' I/O buffers also need memory for their own function.

If you have over 200 threads, you also pay the price: N × (stack size), plus approximately N × (TLAB size) reserved in the young generation for each thread (dynamically resizable):

java -Xss1024k -XX:+PrintFlagsFinal 2> /dev/null | grep -i tlab

size_t MinTLABSize    = 2048
intx ThreadStackSize  = 1024

Approximately half a gigabyte just for this (and probably more)!

Thread Stack Size (in Kbytes). (0 means use default stack size) [Sparc: 512; Solaris x86: 320 (was 256 prior in 5.0 and earlier); Sparc 64 bit: 1024; Linux amd64: 1024 (was 0 in 5.0 and earlier); all others 0.] - Java HotSpot VM Options; Linux x86 JDK source

In short: -Xss (stack size) defaults depend on the VM and OS environment.
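As a back-of-the-envelope sketch (thread count from the question, stack size from the -Xss1024k default shown above; actual resident usage is usually lower, since stack pages are committed on demand):

```java
// Back-of-the-envelope sketch: address space reserved for thread stacks.
// 200 threads and a 1024 KB stack come from the question and flags above.
public class StackReservation {
    public static void main(String[] args) {
        int threads = 200;   // 100 readers + 100 writers
        long stackKb = 1024; // -Xss1024k (Linux amd64 default)
        long reservedMb = threads * stackKb / 1024;
        System.out.println("Stacks alone reserve ~" + reservedMb + " MB of address space");
    }
}
```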

Thread Local Allocation Buffers are more intricate and help against allocation contention/resource locking. An explanation of the setting is linked here; for their function, see TLAB allocation and TLABs and Heap Parsability.

Further reading: "Native Memory Tracking" and Q: "Java using much more memory than heap size"


why does it throw a GC limit reached error in this case?

"GC overhead limit exceeded". In short: each GC cycle reclaimed too little memory, and the JVM's ergonomics decided to abort rather than keep thrashing. By default, HotSpot throws this OutOfMemoryError when more than 98% of total time is spent in GC (-XX:GCTimeLimit=98) while less than 2% of the heap is recovered (-XX:GCHeapFreeLimit=2). Your process needs more memory.

When another process was started on the same server by another user, it occupied 14 GB. Then one of my processes failed.

Another point: when running multiple large-memory processes back-to-back, consider this:

java -Xms28g -Xmx28g <...>;
# above process finishes
java -Xms28g -Xmx28g <...>; # crashes, can't allocate enough memory

When the first process finishes, the OS needs some time to zero out the memory deallocated by the exiting process before it can hand those physical memory regions to the second process. This may take a while, and until then you cannot start another "big" process that immediately asks for the full 28 GB of heap (observed on WinNT 6.1). This can be worked around with:

  • Reduce -Xms so the large allocation happens later in the second process's lifetime
  • Reduce the overall -Xmx heap
  • Delay the start of the second process
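The third workaround can be sketched like this (job names are hypothetical, and the sleep duration is a guess to tune for your machine):

```shell
# Hypothetical job names; start the second large-heap JVM only after the
# first has exited and the OS has had time to reclaim its pages.
java -Xms28g -Xmx28g FirstJob
sleep 30   # grace period before the next big allocation; tune as needed
java -Xms28g -Xmx28g SecondJob
```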
answered by BotOfWar
  • Thanks for the detailed answer with more explanations. I am going to try point three, which makes the most sense to solve the issue right now: `Delay the start of the second process` – Gibbs Aug 07 '20 at 01:21
    MaxRAM doesn't seem to provide any guarantees related to max JVM memory consumption - it's only the value used to calculate Max heap size: https://chriswhocodes.com/hotspot_options_jdk11.html?s=MaxRam – Juraj Martinka Aug 07 '20 at 19:50
    The resident memory consumed by thread stacks isn't typically N*ThreadStackSize. The (excellent) SO answer you linked (https://stackoverflow.com/questions/53451103/java-using-much-more-memory-than-heap-size-or-size-correctly-docker-memory-limi/53624438#53624438) provides more details and argues it's typically between 80 and 200 kB per thread stack. – Juraj Martinka Aug 07 '20 at 19:59