
I'm using DL4J in my Android project to run a CNN as a classifier. The problem is that this CNN model occupies more memory than my smartphone's per-application heap limit allows, which leads to an out-of-memory error. So I'm wondering whether there is a way to explicitly free the memory that DL4J's native code allocates.

(screenshot: Android Profiler memory view)

My input is 200 image patches in total. I need to stack them together so that processing is faster. I tried setting the batch size to 32, so each input INDArray has shape [32, 3, 44, 44]. I also tried 16, 8, etc. The only time I don't get an out-of-memory error is when I feed in one image patch at a time, but I can't afford the long processing time that causes.
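For context on the numbers involved, here is a rough back-of-the-envelope calculation of the off-heap footprint of one input batch, assuming ND4J's default 32-bit float storage (the intermediate CNN activations, not shown here, typically dominate the actual usage):

```java
// Rough off-heap size of one input batch of shape
// [batchSize, channels, height, width] stored as 32-bit floats.
public class BatchFootprint {
    static long batchBytes(long batchSize, long channels, long h, long w) {
        return batchSize * channels * h * w * 4L; // 4 bytes per float
    }

    public static void main(String[] args) {
        // A [32, 3, 44, 44] batch is only ~0.7 MB on its own
        System.out.println(batchBytes(32, 3, 44, 44)); // 743424
        System.out.println(batchBytes(1, 3, 44, 44));  // 23232
    }
}
```

The input tensor itself is small, which suggests the per-layer activation buffers allocated during `model.output()` are what actually exhaust the heap as the batch size grows.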

I tried to free the memory explicitly with GC, but it didn't work, which makes sense because the memory is held by native code.

    for (int i = 0; i < N / UtilsCustom.NN_batch_size; i++) {
        INDArray temp = UtilsCustom.overallArray.get(
                NDArrayIndex.interval(i * UtilsCustom.NN_batch_size,
                        (i + 1) * UtilsCustom.NN_batch_size),
                NDArrayIndex.all(), NDArrayIndex.all(), NDArrayIndex.all());
        NN_classify(temp);
        a = i * UtilsCustom.NN_batch_size + UtilsCustom.NN_batch_size;

        if (i % 2 == 1) {
            model = null;
            java.lang.System.gc();
            Log.d(TAG, "GCGCGC");
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }

            loadNNModel();
        }

        temp = null;
        java.lang.System.gc();
    }
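Since the allocations live off-heap, ND4J's workspaces are the mechanism intended to bound and reuse that native memory; nulling references and calling `System.gc()` only affects the small on-heap INDArray wrappers. Below is a sketch of the same batching loop wrapped in a try-with-resources workspace. The workspace name `INFERENCE_WS` and the 10 MB initial size are illustrative assumptions, and whether `model.output()` runs entirely inside an externally activated workspace depends on the DL4J version in use:

```java
import org.nd4j.linalg.api.memory.MemoryWorkspace;
import org.nd4j.linalg.api.memory.conf.WorkspaceConfiguration;
import org.nd4j.linalg.api.memory.enums.LearningPolicy;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;
import org.nd4j.linalg.indexing.NDArrayIndex;

// Size the workspace buffer after the first loop iteration,
// then reuse it instead of growing native memory every batch.
WorkspaceConfiguration wsConf = WorkspaceConfiguration.builder()
        .initialSize(10 * 1024L * 1024L)          // illustrative 10 MB
        .policyLearning(LearningPolicy.FIRST_LOOP)
        .build();

for (int i = 0; i < N / UtilsCustom.NN_batch_size; i++) {
    // INDArrays created inside this block are allocated in the
    // workspace buffer; the memory is reclaimed when the block exits.
    try (MemoryWorkspace ws = Nd4j.getWorkspaceManager()
            .getAndActivateWorkspace(wsConf, "INFERENCE_WS")) {
        INDArray temp = UtilsCustom.overallArray.get(
                NDArrayIndex.interval(i * UtilsCustom.NN_batch_size,
                        (i + 1) * UtilsCustom.NN_batch_size),
                NDArrayIndex.all(), NDArrayIndex.all(), NDArrayIndex.all());
        NN_classify(temp);
    }
}
```

This avoids the reload-the-model workaround entirely; the periodic `model = null` / `loadNNModel()` dance should not be necessary once allocations are scoped to a workspace.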

private void NN_classify(INDArray imageMat) {

    int result;

    DataNormalization scaler = new ImagePreProcessingScaler(0, 1);
    scaler.transform(imageMat);

    INDArray output = model.output(imageMat);

    Log.d(LOG_TAG, "output.size(): " + output.size(0) + ", " + output.size(1));

    double prob_parasitemic = 0, prob_uninfected = 0;

    for (int i = 0; i < output.size(0); i++) {
        prob_parasitemic = output.getDouble(i, 0);
        prob_uninfected = output.getDouble(i, 1);
        // log the current row, not row 0
        Log.d(LOG_TAG, "prob_parasitemic: " + i + " " + prob_parasitemic);
        Log.d(LOG_TAG, "prob_uninfected: " + i + " " + prob_uninfected);
    }

    if (prob_parasitemic > prob_uninfected) {
        result = 2;
    } else {
        result = 1;
    }

    Log.e(LOG_TAG, "Result: " + result);

    imageMat = null;
    output = null;
    java.lang.System.gc();
}
  • Perhaps the training data should be sent to a service (which can be scaled), and the learned model is returned to the device. – Andrew S Feb 13 '18 at 17:40

1 Answer

java.lang.System.gc() does not do what you think it does. It is not like free in C or delete in C++.

It is more of a "suggestion" to the JVM to run garbage collection. It does not force garbage collection or freeing of memory.

Reference this: https://stackoverflow.com/a/66573/9241296

It states,

I wouldn't depend on it in your code. If the JVM is about to throw an OutOfMemoryError, calling System.gc() won't stop it, because the garbage collector will attempt to free as much as it can before it goes to that extreme.

  • Realistically, it may just require that much memory. Try setting the batch to 1 item, see how much memory it uses. If it's using X memory and you don't have enough memory for 2X, then you cannot logically put another item into the batch. – Parker Feb 13 '18 at 17:24