
I have Kubernetes set up and running a gRPC service in a pod. I am successfully hitting an endpoint on the service, which has a print() statement in it, but I see no logs in the log file. I have seen this before when running a (cron) Job in Kubernetes, where the logs only appeared after the Job finished (as opposed to while it was running). Is there a way to make Kubernetes write to the log file right away? Is there any setting I can use (either cluster-level or just for the pod)? Thanks for any help in advance!

HeronAlgoSearch
  • Is this related to your other question? This sounds even more like your gRPC service isn't writing to stdout – thisguy123 May 16 '17 at 06:10

3 Answers


Found the root cause. Specifically, found it at Python app does not print anything when running detached in docker. The solution is to set the following environment variable: PYTHONUNBUFFERED=0. It was not that the print statement was not being executed; its output was being buffered. Setting the variable above solves the issue.
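To illustrate the buffering behavior described above: when stdout is a pipe (as it is inside a container), Python block-buffers it, so print() output can sit in the buffer until the process exits. A minimal sketch of the per-call alternative, using a hypothetical `log` helper:

```python
import sys

# Inside a container, stdout is usually a pipe, so Python block-buffers it:
# print() output stays in the buffer until it fills or the process exits.
# Forcing a flush makes each line visible in `kubectl logs` immediately.

def log(msg):
    # flush=True pushes the line through the buffer right away
    print(msg, flush=True)

log("handling gRPC request")

# Equivalent alternatives:
#   - start the interpreter with `python -u`
#   - set PYTHONUNBUFFERED as described in the answer
#   - call sys.stdout.flush() explicitly after printing
sys.stdout.flush()
```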

HeronAlgoSearch
  • Thanks a ton! The same happened to me with a C program (solved with `fflush(stdout);`, thanks to https://stackoverflow.com/questions/1716296/why-does-printf-not-flush-after-the-call-unless-a-newline-is-in-the-format-strin) – Janaka Bandara Sep 02 '17 at 07:13
  • As a sidenote, you can also start Python with the `-u` flag for unbuffered output, e.g. ```/bin/python3 -u /path/to/script.py``` – Brian Sizemore Mar 21 '18 at 13:54
  • While the solution works, I would prefer a value of `1`, since any non-empty string enables the `PYTHONUNBUFFERED` option. Using the value `0` looks like you want to turn the option off instead. – joente Apr 28 '20 at 11:15

Here is an example of a Kubernetes Deployment YAML so you can copy-paste the solution from the aforementioned answer:

root@k8s:~/python_docker$ cat deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hpa
spec:
  selector:
    matchLabels:
      app: hpa
  template:
    metadata:
      labels:
        app: hpa
    spec:
      containers:
      - name: hpa
        image: my-hpa/py
        env:
        - name: PYTHONUNBUFFERED
          value: "0"
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
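If you prefer to bake the setting into the image rather than the pod spec, the same variable can be set in the Dockerfile. A minimal sketch, assuming a hypothetical Python image with an `app.py` entrypoint (names are illustrative, not from the original answers):

```dockerfile
FROM python:3
# Any non-empty value disables Python's stdout/stderr buffering,
# so print() output reaches the container log immediately.
ENV PYTHONUNBUFFERED=1
COPY app.py /app/app.py
# Alternatively, pass -u to the interpreter for the same effect:
# CMD ["python", "-u", "/app/app.py"]
CMD ["python", "/app/app.py"]
```

Setting it in the image means every pod running that image gets unbuffered output without each Deployment having to repeat the env entry.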
Andrej

One possibility is that the container is starved for CPU. We ran into this issue when running locally on minikube with the resource limits that are enforced in our larger cluster. Try bumping the CPU resource limits on your pod; below is an example YAML.

If your CPU limits are around 20-40m, that might be too low to run a full Flask/Python app. Try bumping them to closer to 100m. It's not going to crush your local machine.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: python-app
spec:
  selector:
    matchLabels:
      app: python-app
  template:
    metadata:
      labels:
        app: python-app
    spec:
      containers:
      - name: python-app
        image: python-app
        imagePullPolicy: Never
        resources:
          requests:
            cpu: 40m
            memory: 40Mi
          limits:
            cpu: 100m
            memory: 100Mi
thealmightygrant