
update: this question is related to Google Colab's "Notebook settings: Hardware accelerator: GPU". This question was written before the "TPU" option was added.

After reading multiple excited announcements about Google Colaboratory providing a free Tesla K80 GPU, I tried to run a fast.ai lesson on it, only for it to never complete - it quickly ran out of memory. I started investigating why.

The bottom line is that the "free Tesla K80" is not "free" for all - for some, only a small slice of it is "free".

I connect to Google Colab from the West Coast of Canada and I get only 0.5 GB of what is supposed to be a 24 GB GPU RAM. Other users get access to 11 GB of GPU RAM.

Clearly 0.5GB GPU RAM is insufficient for most ML/DL work.

If you're not sure what you get, here is a little debug function I scraped together (it only works with the GPU setting of the notebook):

# memory footprint support libraries/code
!ln -sf /opt/bin/nvidia-smi /usr/bin/nvidia-smi
!pip install gputil psutil humanize
import os
import psutil
import humanize
import GPUtil as GPU

# XXX: there is only one GPU on Colab, and even that isn't guaranteed
GPUs = GPU.getGPUs()
gpu = GPUs[0]

def printm():
    # general (CPU) RAM: free system memory and this process's resident size
    process = psutil.Process(os.getpid())
    print("Gen RAM Free: " + humanize.naturalsize(psutil.virtual_memory().available),
          " | Proc size: " + humanize.naturalsize(process.memory_info().rss))
    # GPU RAM as reported by nvidia-smi (via GPUtil)
    print("GPU RAM Free: {0:.0f}MB | Used: {1:.0f}MB | Util {2:3.0f}% | Total {3:.0f}MB".format(
        gpu.memoryFree, gpu.memoryUsed, gpu.memoryUtil * 100, gpu.memoryTotal))

printm()

Executing it in a Jupyter notebook before running any other code gives me:

Gen RAM Free: 11.6 GB  | Proc size: 666.0 MB
GPU RAM Free: 566MB | Used: 10873MB | Util  95% | Total 11439MB

The lucky users who get access to the full card will see:

Gen RAM Free: 11.6 GB  | Proc size: 666.0 MB
GPU RAM Free: 11439MB | Used: 0MB | Util  0% | Total 11439MB
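
If you'd rather not install extra packages, a similar readout can be pulled straight from nvidia-smi in a cell (this is just an alternative check, again only on the GPU runtime, where the binary is available):

!nvidia-smi --query-gpu=memory.total,memory.used,memory.free,utilization.gpu --format=csv

GPUtil itself shells out to nvidia-smi under the hood, so the two readouts should agree.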

Do you see any flaw in my calculation of the GPU RAM availability, borrowed from GPUtil?

Can you confirm that you get similar results if you run this code in a Google Colab notebook?

If my calculations are correct, is there any way to get more of that GPU RAM on the free box?

update: I'm not sure why some of us get 1/20th of what other users get. For example, the person who helped me debug this is from India and he gets the whole thing!

note: please don't send any more suggestions on how to kill potentially stuck/runaway/parallel notebooks that might be consuming parts of the GPU. No matter how you slice it, if you are in the same boat as I am and run the debug code, you'll see that you still get a total of 5% of the GPU RAM (as of this update, still).

stason
  • Any solution to this? Why do I get different results when doing !cat /proc/meminfo? – MiloMinderbinder Feb 19 '18 at 04:09
  • Yep, same problem, just around 500 mb of GPU ram...misleading description :( – Naveen Apr 10 '18 at 08:31
  • Try IBM open source data science tools(cognitiveclass.ai) as they also have a free GPU with jupyter notebooks. – A Q Jun 24 '18 at 11:14
  • I've rolled back this question to a state where there's actually a *question* in it. If you've done more research and found an answer, the appropriate place for that is in the answer box. It is incorrect to update the question with a solution. – Chris Hayes Aug 24 '18 at 00:30
  • @ChrisHayes, I understand your intention, but this is not right, since your rollback deleted a whole bunch of relevant details that are now gone. If you'd like to suggest a better wording that better fits the rules of this community please do so, but otherwise please revert your rollback. Thank you. p.s. I already did post the [answer](https://stackoverflow.com/a/51178965/9201239). – stason Aug 24 '18 at 01:02
  • @stason your printm() is wonderful but unfortunately, google seemed to have made some structural changes so it doesn't work (nvidia-smi not found). Can you post an updated version? – Agile Bean Nov 29 '18 at 13:03
  • @AgileBean, it's because you're on the TPU setup. `printm()` is for the Nvidia GPU setup. I updated the question to reflect that. Thank you for the heads up. – stason Nov 30 '18 at 01:07
  • Have you tried setting the TensorFlow session configuration? e.g. `gpu_options=tf.GPUOptions(per_process_gpu_memory_fraction=1)` and `sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))` – user3731622 Mar 26 '19 at 22:49
  • Thank you - but the issue has absolutely nothing to do with TF. I'm not sure why you thought it has anything to do with TF. – stason Mar 27 '19 at 02:38
  • I get Gen RAM Free: 26.3 GB | Proc size: 349.4 MB GPU RAM Free: 16280MB | Used: 0MB | Util 0% | Total 16280MB! Does this mean I have 26.3 GB to use? – chikitin Nov 11 '19 at 05:31

9 Answers


So, to prevent another dozen answers suggesting !kill -9 -1 - which is invalid in the context of this thread - let's close this thread:

The answer is simple:

As of this writing Google simply gives only 5% of GPU to some of us, whereas 100% to the others. Period.

dec-2019 update: The problem still exists - this question's upvotes keep coming.

mar-2019 update: A year later, a Google employee, @AmiF, commented on the state of things, stating that the problem doesn't exist, and that anybody who seems to have this problem simply needs to reset their runtime to recover the memory. Yet the upvotes continue, which tells me that the problem still exists, despite @AmiF's suggestion to the contrary.

dec-2018 update: I have a theory that Google may keep a blacklist of certain accounts, or perhaps browser fingerprints, when its robots detect non-standard behavior. It could be a total coincidence, but for quite some time I had an issue with Google reCAPTCHA on any website that happened to require it, where I'd have to go through dozens of puzzles before I'd be allowed through, often taking me 10+ minutes to get through. This lasted for many months. All of a sudden, as of this month, I get no puzzles at all and any Google reCAPTCHA gets resolved with a single mouse click, as it used to be almost a year ago.

And why am I telling this story? Well, because at the same time I was given 100% of the GPU RAM on Colab. Hence my suspicion that if you are on a theoretical Google blacklist, you aren't trusted with a lot of free resources. I wonder if any of you find the same correlation between the limited GPU access and the reCAPTCHA nightmare. As I said, it could be a total coincidence as well.

stason
  • Your statement of "As of this writing Google simply gives only 5% of GPU to some of us, whereas 100% to the others. Period." is incorrect - Colab has never worked this way. All diagnosed cases of users seeing less than the full complement of GPU RAM available to them have boiled down to another process (started by the same user, possibly in another notebook) using the rest of the GPU's RAM. – Ami F Mar 22 '19 at 22:37
  • Future readers: if you think you're seeing this or similar symptoms of GPU RAM unavailability, "Reset all runtimes" in the Runtime menu will get you a fresh VM guaranteeing no stale processes are still holding on to GPU RAM. If you still see this symptom immediately after using that menu option please file a bug at https://github.com/googlecolab/colabtools/issues – Ami F Mar 22 '19 at 22:37
  • Your reality is clearly different from the reality of many others who continue to vote up this post a year after it was created. It's very likely that some users indeed encounter what you described, but this is not the case for all. So I'm not sure how your statement helps here. Besides, when someone asked this exact question in the repo you recommended, he got a BS answer and his ticket was closed: https://github.com/googlecolab/colabtools/issues/52 – stason Mar 23 '19 at 00:49
  • In case it was unclear: I'm not describing what I believe the implementation is based on observation of the system's behavior as a user. I'm describing what I directly know the implementation to be. I posted hoping that users who see less than full availability report it as an issue (either user error or system bug) instead of reading the incorrect statements above and assuming things are working as intended. – Ami F Mar 24 '19 at 01:28
  • In other words you're saying you're a Google employee and you're implying that colab stopped discriminating users and from now on if a user doesn't get 100% GPU RAM on their first connect and not due to some previous usage by the same user, there must be a bug in your system, which you request to report. And you will actually look at the problem and not deal with it like I have shown in [this example](https://github.com/googlecolab/colabtools/issues/52) where one of you lied to the user giving him an incorrect reason for the problem. @AmiF. – stason Mar 24 '19 at 15:17
  • No, GPUs have never been shared, and there are no lies in the example you linked (simply a guess at and explanation of the far-and-away most-common reason for the symptom reported). – Ami F Mar 24 '19 at 20:08

Last night I ran your snippet and got exactly what you got:

Gen RAM Free: 11.6 GB  | Proc size: 666.0 MB
GPU RAM Free: 566MB | Used: 10873MB | Util  95% | Total 11439MB

but today:

Gen RAM Free: 12.2 GB  | Proc size: 131.5 MB
GPU RAM Free: 11439MB | Used: 0MB | Util   0% | Total 11439MB

I think the most probable reason is that the GPUs are shared among VMs, so each time you restart the runtime you have a chance of switching GPUs, and there is also a probability that you switch to one that is being used by other users.

UPDATED: It turns out that I can use the GPU normally even when the GPU RAM Free is 504 MB, which I had thought was the cause of the ResourceExhaustedError I got last night.

Nguyễn Tài Long
  • I think I re-connected probably 50 times over the period of a few days and I was always getting the same 95% usage to start with. Only once I saw 0%. In all those attempts I was getting cuda out of memory error once it was coming close to 100%. – stason Feb 16 '18 at 04:40
  • What do you mean with your update? Can you still run stuff with 500Mb? I have the same problem, I am getting `RuntimeError: cuda runtime error (2) : out of memory at /pytorch/torch/lib/THC/generated/../THCTensorMathCompare.cuh:84` – ivan_bilan Mar 11 '18 at 21:03

If you execute a cell that just has
!kill -9 -1
in it, that'll cause all of your runtime's state (including memory, filesystem, and GPU) to be wiped clean and restarted. Wait 30-60s and press the CONNECT button at the top-right to reconnect.

Ajaychhimpa1

Restart Jupyter IPython Kernel:

!pkill -9 -f ipykernel_launcher
mkczyk

Find the python3 PID and kill that PID. (The original answer included a screenshot of the process list from which the PID was identified.)

Note: kill only the python3 process (PID 130 in the screenshot), not the Jupyter python process (PID 122).
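
Since the screenshot doesn't come through here, a rough equivalent from a notebook cell might look like this (the PID below is just the example from the original screenshot and will differ on your VM):

# list running processes and look for the long-running python3 one
!ps aux | grep python3
# kill it by PID; 130 is only the example PID from the screenshot
!kill -9 130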

Manivannan Murugavel

I'm not sure if this blacklisting is true! It's rather possible that the cores are shared among users. I also ran the test, and my results are the following:

Gen RAM Free: 12.9 GB  | Proc size: 142.8 MB
GPU RAM Free: 11441MB | Used: 0MB | Util   0% | Total 11441MB

It seems I'm also getting the full core. However, I ran it a few times and got the same result. Maybe I will repeat this check a few times during the day to see if there is any change.

Kregnach

Just give Google Colab a heavy task, and it will ask you to switch to 25 GB of RAM.


For example, run this code twice:

# the model below is deliberately oversized - the goal is to exhaust RAM so that
# Colab offers to switch the session to the high-RAM runtime
from keras.datasets import cifar10
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

(train_features, train_labels), (test_features, test_labels) = cifar10.load_data()

model = Sequential()

model.add(Conv2D(filters=16, kernel_size=(2, 2), padding="same", activation="relu", input_shape=(train_features.shape[1:])))
model.add(MaxPooling2D(pool_size=(2, 2), padding='same'))

model.add(Conv2D(filters=32, kernel_size=(3, 3), padding="same", activation="relu"))
model.add(MaxPooling2D(pool_size=(2, 2), padding='same'))

model.add(Conv2D(filters=64, kernel_size=(4, 4), padding="same", activation="relu"))
model.add(MaxPooling2D(pool_size=(2, 2), padding='same'))

model.add(Flatten())

# the four huge dense layers are what blow up the memory usage
model.add(Dense(25600, activation="relu"))
model.add(Dense(25600, activation="relu"))
model.add(Dense(25600, activation="relu"))
model.add(Dense(25600, activation="relu"))
model.add(Dense(10, activation="softmax"))

model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

model.fit(train_features, train_labels, validation_split=0.2, epochs=10, batch_size=128, verbose=1)

Then click on "Get more RAM". :) (The original answer included screenshots of the crash dialog and the "Get more RAM" prompt.)


Jainil Patel
  • I can confirm this. I had a 15 gig dataset of mostly HD pictures (my drive has 30 gigs instead of 15gigs) and I ran my code to resize the image dataset to 224,224,3 and I was switched to a high RAM runtime. Then when I began training RAM usage went up to 31.88gigs. – Anshuman Kumar Feb 25 '20 at 06:40
  • But I would like to add that once I finished that job, I have not been able to access another GPU/TPU for the past 24 hours. It is possible I was blacklisted. – Anshuman Kumar Feb 25 '20 at 07:01
  • @AnshumanKumar, apply the heavy load at the beginning only; otherwise, on changing the configuration, you will lose the previously done work which is in RAM. I didn't use the high configuration for 24 hours, so I don't know about blacklisting. – Jainil Patel Feb 26 '20 at 20:02
  • Yes, that did happen with me. However the work got done. – Anshuman Kumar Feb 27 '20 at 03:21

I believe the issue arises if we have multiple notebooks open. Just closing a notebook doesn't actually stop its process, and I haven't figured out how to stop it from the UI. But I used top to find the PID of the python3 process that had been running the longest and was using most of the memory, and I killed it. Everything is back to normal now.

Ritwik G

Google Colab resource allocation is dynamic, based on users' past usage. If one user has been using more resources recently, and another user uses Colab less frequently, the latter will be given relatively more preference in resource allocation.

Hence, to get the most out of Colab, close all your Colab tabs and all other active sessions, then reset the runtime of the one you want to use. You'll definitely get a better GPU allocation.