
Using VisualVM and checking the Tomcat 8.5 catalina.out log, I see that almost every time a full GC happens (7 out of 11 times or so), the logs show an OutOfMemoryError at the exact same minute.

The Tomcat JVM parameters related to memory management are: -Xms3G -Xmx6G -XX:+UseG1GC -XX:+UseStringDeduplication -XX:MaxHeapFreeRatio=100
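For reference, these flags are passed to Tomcat via CATALINA_OPTS, typically in bin/setenv.sh; a minimal sketch (standard Tomcat convention, values as above):

    # bin/setenv.sh
    CATALINA_OPTS="$CATALINA_OPTS -Xms3G -Xmx6G -XX:+UseG1GC -XX:+UseStringDeduplication -XX:MaxHeapFreeRatio=100"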

At first I thought it was because of the default -XX:MaxHeapFreeRatio value of 70, since I saw that the max. heap size (and the used heap, of course) would drop significantly during full GC, to ~10-20%. However, adding -XX:MaxHeapFreeRatio=100 did not fix it.
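To see what the heap actually looks like when this happens, I could also enable GC logging and an automatic heap dump on OOM; these are standard JDK 8 flags (paths are placeholders):

    -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/path/to/gc.log
    -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/path/to/dumps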

Although the memory usage graph below was taken with a different set of JVM parameters (I cannot get the one with the old JVM parameters at the moment), it is similar in that after a full GC the memory usage grows rapidly, the max. heap size is the same, and that max. heap size does not drop.

[memory usage graph]

Any ideas why this could happen?

Update: I forgot to mention that previously the full GC and OutOfMemoryError would happen when the heap was not even full, at ~5 GB. Back then I never once saw the heap reach 6 GB.

user435421
  • You should tune those JVM GC parameters: http://blog.sokolenko.me/2014/11/javavm-options-production.html – duffymo Sep 07 '17 at 14:37
  • Before asking questions like this, do some profiling; for example, you can use YourKit. In your case it might happen due to the GC not being able to keep up with too many objects being created and released in a short amount of time. – tsolakp Sep 07 '17 at 15:36

3 Answers


Obviously some of the objects created cannot be garbage collected properly. You can try the sampler function of VisualVM and track the number of instances created.
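If VisualVM is not available on the server, a class histogram from the JDK's jmap utility gives similar instance counts (<pid> is a placeholder; the :live option forces a full GC first):

    jmap -histo:live <pid>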

Alex
  • You are right, and that is what OOM means, but how can OOM happen when there is around 1 GB of free space left (see question update), and at the same time as a full GC? – user435421 Sep 07 '17 at 14:57
  • This might have several reasons: the PermGen space is full, there is no contiguous memory block available for a large array or object, the time spent in GC is too long, an array size exceeds the JVM limit, etc. A sketch of the large-allocation case follows below. – Alex Sep 07 '17 at 15:10
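The large-allocation case referenced in the comment above can be reproduced with a toy program; a minimal sketch (class name and sizes are illustrative, run with -Xmx64m):

    import java.util.ArrayList;
    import java.util.List;

    // Run with: java -Xmx64m BigAllocationDemo
    public class BigAllocationDemo {
        public static void main(String[] args) {
            List<byte[]> retained = new ArrayList<>();
            // Keep roughly 40 MB of live data on a 64 MB heap.
            for (int i = 0; i < 40; i++) {
                retained.add(new byte[1024 * 1024]);
            }
            // A single 30 MB request cannot be satisfied even after a full GC
            // (everything above is still referenced), so an OutOfMemoryError
            // is thrown while the usage graph still shows ~24 MB free.
            byte[] big = new byte[30 * 1024 * 1024];
            System.out.println(big.length + " / " + retained.size());
        }
    }

This matches the question's update: the graph can show free heap at the moment of the OOM because the failed request was simply bigger than what was left.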

Try caching I/O operations with MapDB.

You can do it like this to cache to a disk-based file database:

import java.io.File;
import java.io.IOException;
import java.util.Map;

import org.mapdb.DB;
import org.mapdb.DBMaker;

/**
 * Singleton class that caches key/value pairs in a disk-based MapDB database.
 */
public class DBManager
{
    private static final File dbFile = new File("path/to/file");
    private static final String password = "yourPassword";
    private static DBManager manager = null;

    private DB db;
    private Map<Integer, String> ctDB;

    /**
     * Singleton method.
     * @return an instance of this class, or null if the database file
     *         could not be created.
     */
    public static DBManager getInstance()
    {
        if (!isFileDatabaseOK())
        {
            return null;
        }

        // Create the instance on first use.
        if (manager == null)
        {
            manager = new DBManager();
        }
        return manager;
    }

    /**
     * Private constructor: loads the database file from the given path
     * and initializes the database.
     */
    private DBManager()
    {
        initMapDB();
    }

    /**
     * Initializes the MapDB database.
     *
     * Persistence: MapDB reloads the same database from the file after a
     * JVM shutdown because deleteFilesAfterClose() is not used.
     * @see <link>https://groups.google.com/forum/#!topic/mapdb/AW8Ax49TLUc</link>
     */
    private void initMapDB()
    {
        db = DBMaker.newFileDB(dbFile)
                .closeOnJvmShutdown()
                .asyncWriteDisable()
                .encryptionEnable(password.getBytes())
                .make();

        // Create the map, or get the existing one.
        ctDB = db.getTreeMap("database");
    }

    /**
     * File existence check.
     * If the file does not exist (first run), create a new database file
     * and inform the user.
     */
    private static boolean isFileDatabaseOK()
    {
        if (dbFile.exists())
        {
            return true;
        }

        try
        {
            dbFile.getParentFile().mkdirs();
            dbFile.createNewFile();

            // TODO
            // System.out.println("Database not found. Creating a new one.");

            return true;
        }
        catch (IOException e)
        {
            // TODO Error handling
            e.printStackTrace();
            return false;
        }
    }

    /**
     * Gets an object by id.
     * @param id the key to look up.
     * @return the object stored under that id.
     */
    public String get(int id)
    {
        return ctDB.get(id);
    }

    /**
     * Adds an object to the database.
     * @param id     the key under which the object is stored.
     * @param object the object to cache.
     */
    public void put(int id, String object)
    {
        ctDB.put(id, object);
        db.commit();
    }
}

And then do:

    DBManager manager = DBManager.getInstance();
    manager.put(1, "test");
    System.out.println(manager.get(1));
MHDx

G1GC works well if you keep the default values for most of the parameters. Set only the key parameters

-XX:MaxGCPauseMillis
-XX:G1HeapRegionSize
-XX:ParallelGCThreads
-XX:ConcGCThreads

and leave everything else to Java.
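A sketch of what that could look like (the values are purely illustrative, not recommendations):

    -XX:MaxGCPauseMillis=200 -XX:G1HeapRegionSize=8m -XX:ParallelGCThreads=8 -XX:ConcGCThreads=2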

You can find more details in the posts below:

Java 7 (JDK 7) garbage collection and documentation on G1

Why do I get OutOfMemory when 20% of the heap is still free?

Use a memory analyzer tool like MAT (Eclipse Memory Analyzer) to find the root cause.
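For example, a heap dump for MAT can be taken from the running JVM with jmap (<pid> is a placeholder):

    jmap -dump:live,format=b,file=heap.hprof <pid>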

In your case, it is evident that the old gen is growing. Check for possible memory leaks. If you do not find any memory leaks, increase the heap memory further.

Ravindra babu