Currently I have 3 linode servers:

1: Cache server (Ubuntu, varnish)

2: App server (Ubuntu, nginx, rabbitmq-server, python, php5-fpm, memcached)

3: DB server (Ubuntu, postgresql + pg_bouncer)

On my app server I have multiple sites (top-level domains). Each site lives in its own virtual environment created with virtualenvwrapper. Some sites are big with a lot of traffic, and some are small with little traffic.

A typical site consists of Python (Django), Celery (beat, Flower) and Gunicorn.

My current development pattern is to work inside a staging environment on the app server and commit changes to git. Then I switch to the production environment, do a git pull, run ./manage.py migrate and restart the processes with sudo supervisorctl restart sitename:, but this takes time! There must be a simpler method!
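For reference, the per-site routine amounts to a handful of shell commands. Purely as an illustration of that routine, here is a minimal sketch of the same steps wrapped into a single Fabric 1.x task; the host alias, project path and site name are hypothetical placeholders:

```python
# fabfile.py -- minimal sketch of the manual routine above (Fabric 1.x assumed).
# 'app-server', '/srv/<site>' and 'sitename' are hypothetical placeholders,
# and activating the site's virtualenv is omitted for brevity.
from fabric.api import cd, env, run, sudo, task

env.hosts = ['app-server']  # SSH alias of the app server (placeholder)

@task
def deploy(site='sitename'):
    """Pull the latest code, migrate, and restart one site's processes."""
    with cd('/srv/%s' % site):      # placeholder project path
        run('git pull')
        run('./manage.py migrate')
    # restart the supervisor group, same as doing it by hand
    sudo('supervisorctl restart %s:' % site)
```

Run as `fab deploy:site=sitename`.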

Therefore it seems like Docker could help simplify everything, but I can't decide on the best approach for managing all my sites and the containers inside each site.

I have looked at http://panamax.io and https://github.com/progrium/dokku, but I'm not sure if either of them fits my needs.

Ideally I would run a development version of each site on my local machine (emulating the cache server, app server and DB server), make code changes there and test them. Once I saw that the changes worked, I would execute a command that does all the heavy lifting: sending the changes to the Linode servers (mostly the app server, I would think), running the migrations and restarting the project on the server.

Could anyone point me in the right direction as how to achieve this?

Tomas Jacobsen

1 Answer

I have faced the same problem. I don't claim this is the best possible answer and am interested to see what others have come up with.

There doesn't seem to be any really turnkey solution on Docker yet.

It's also been frustrating that most of the 'Django + Docker' tutorials focus on a single Django site, so they bundle the web server and everything else into the same Docker container. I think if you have multiple sites on a server you want them to share a single web server, but that quickly gets more complicated than what the tutorials present, at which point they're no longer much help.

Roughly what I came up with is this:

  • using Fig to manage containers and the complicated Docker config that would be tedious to type as command-line options all the time
  • sites are Django, on uWSGI + Nginx (no reason you couldn't have others, though)
  • I have a git repo per site, plus a git repo for the 'server'
  • separate containers for db, nginx and each site
  • each site container has its own uWSGI instance... I do some config switching so I can bring up either a 'dev' container where uWSGI acts as a standalone web server, or a 'live' container where the uWSGI socket is exposed to the main Nginx container, which then takes over as the front-side web server.
  • I'm not sure yet how useful the 'dev' uWSGI servers are; I might switch to just running the Django dev server and sharing my local code dir as a volume in the container, so I can edit and get live reloading.
  • In the 'server' repo I have all the shared Dockerfiles, for the Nginx server, the base uWSGI app, etc.
  • In the 'server' repo I have made Fabric tasks to do my deployment (check out the server and site repos on the server, build the Docker images, run `fig up`, etc.); a rough sketch follows this list.
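To make that last step concrete, here is a rough sketch of what such a deployment task might look like, assuming Fabric 1.x; the host alias, repo paths and service name are hypothetical placeholders rather than my actual files:

```python
# fabfile.py in the 'server' repo -- rough sketch only (Fabric 1.x assumed).
# Host alias, paths and service names below are hypothetical placeholders.
from fabric.api import cd, env, run, task

env.hosts = ['app-server']  # SSH alias of the Docker host (placeholder)

@task
def deploy(site='mysite'):
    # update the 'server' repo (shared Dockerfiles, fig.yml, nginx config)
    with cd('/srv/server'):
        run('git pull')
    # update the site repo that the site's image is built from
    with cd('/srv/sites/%s' % site):
        run('git pull')
    # rebuild the image for that site and (re)start everything via Fig
    with cd('/srv/server'):
        run('fig build %s' % site)
        run('fig up -d')
```

Building the images on the server like this is also what lets me avoid pushing them through a registry, as described below.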

Speaking of deployment, frankly I'm not very keen on the Docker Registry idea. It seems to mean you have to upload hundreds of megabytes of image data to the registry server every time you want to deploy a new container version. That sucks if you're on a limited-bandwidth connection at the time, and it seems very inefficient.

That's why, so far, I've decided to deploy new code via Git and build the new images on the server. I don't use a Docker registry at all (apart from the public one for a base Ubuntu image). This seems to go against the grain of Docker practice a bit, so I'm curious for feedback.

I'd strongly recommend getting stuck in and building your own solution first. If you have to spend time learning a solution like Dokku, Panamax, etc. that may or may not work for you (I don't think any of them are really ready yet), you may as well spend that time learning Docker directly... it will then be easier to evaluate solutions further down the line.

I tried to get on with Dokku early in my search but had to abandon it because it's not compatible with boot2docker... which means on OS X you're faced with the 'fun' of setting up your own VirtualBox VM to run the Docker daemon. It didn't seem worth the hassle when I wasn't certain I'd want to be stuck with how Dokku works at the end of the day.

Anentropic
  • I think the key to being able to run multiple sites in a single VM with Docker is to use one container for Nginx and one container for each Django app. Then you can instruct Nginx to serve each virtualhost (by name) using the appropriate app container as upstream. – dukebody Feb 05 '15 at 08:46
  • at the moment I have 1 nginx as reverse proxy + a combination of nginx+uwsgi containers for each site (they are sharing postgres). We are currently deploying through the docker registry (build locally, test the images, push/pull through registry) - there is no code on the host, the host is just a shell to run docker images, the nginx config for the proxy is volume mounted and can be changed, all other sites are defined in docker-compose.yml files – Vincent De Smet Mar 04 '15 at 04:39
  • How would you share apps between sites with this configuration? – blissini Dec 23 '15 at 19:37
  • 1
    @blissini if you mean Django apps then I'd say you don't 'share' them between sites, each site is in its own container and will `pip install` the django apps that it needs. Exactly the same as if each site was in a separate virtualenv locally. If you mean services like postgres, redis etc, then yes they can definitely be shared between sites... this is managed by your `docker-compose` (formerly `fig`) configuration. – Anentropic Dec 23 '15 at 22:42
  • @Anentropic I'm talking about my own custom apps, that I want to share between sites. As they are not pip-installable, should that be duplicate code (at least in production)? – blissini Jan 07 '16 at 20:44
  • well I'd avoid duplicate code at all costs. you could make them pip installable (don't have to publish on pypi, could be installed from git repo) or you could use git submodules – Anentropic Jan 08 '16 at 02:12
  • Interesting solution. I thought about using docker containers simply for injecting my django app source code into the apache container which could then run the wsgi app. I like your solution better, as the django code is actually executed inside the django container. How do you serve your static and media files though? Are they handled by uWSGI as well, or did you configure nginx to serve them somehow? – Tim Mar 04 '16 at 09:07
  • @TimSchneider for static files I'm using [whitenoise](http://whitenoise.evans.io/en/latest/django.html) and serving from Django via uWSGI. For media files you need some non-volatile storage. I am not comfortable trying to do that with docker volumes, it's too easy to make a mistake and lose them. I am using S3 via [django-storages](https://django-storages.readthedocs.org/en/latest/) (a minimal sketch of this setup follows below). – Anentropic Mar 04 '16 at 12:24
  • I'd also note that since I wrote this answer the docker world has moved on a little... for example `docker-machine` has made my Fabric scripts obsolete, as I can run `docker build` commands from my local cli, but against the remote docker daemon - this is great, I don't need to upload the large image files, just the local 'build context' (`.dockerignore` file analogous to `.gitignore` is your friend here) and the image is built remotely – Anentropic Mar 04 '16 at 12:28
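For reference, a minimal sketch of the static/media setup described in the comment above, assuming the whitenoise and django-storages releases of that era (class paths have changed in later versions); the bucket name and credentials are placeholders:

```python
# settings.py -- minimal sketch (whitenoise 2.x/3.x and django-storages with
# the s3boto backend assumed; names and credentials below are placeholders).

# Static files: compressed/hashed by whitenoise, served by Django through uWSGI.
STATICFILES_STORAGE = 'whitenoise.django.GzipManifestStaticFilesStorage'

# Media files: stored on S3 so they survive container rebuilds.
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
AWS_STORAGE_BUCKET_NAME = 'example-media-bucket'  # placeholder
AWS_ACCESS_KEY_ID = 'AKIA...'                     # placeholder
AWS_SECRET_ACCESS_KEY = '...'                     # placeholder

# wsgi.py -- wrap the WSGI app so whitenoise serves the static files
# (this wrapper is the whitenoise 2.x/3.x style; newer releases use middleware).
from django.core.wsgi import get_wsgi_application
from whitenoise.django import DjangoWhiteNoise

application = DjangoWhiteNoise(get_wsgi_application())
```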