
I'm building a Docker image on my build server (using TeamCity). After the build is done I want to take the image and deploy it to some server (staging, production).

All tutorials I have found either

  • push the image to some repository where it can be downloaded (pulled) by the server(s), which in small projects introduces additional complexity, or
  • use a Heroku-like approach and build the images "near" or on the machine where they will run.

I really think that nothing special should be done on the (app) servers. Images, IMO, should act as closed, self-sufficient binaries that represent the application as a whole and can be passed between the build server, testing, QA, etc.

However, when I save a standard NodeJS app image based on the official node repository, the result is 1.2 GB. Passing such a file from server to server is not very convenient.

Q: Is there some way to export/save and "upload" just the changed parts (layers) of an image via SSH without introducing the complexity of a Docker repository? The server would then pull the missing layers from the public hub.docker.com in order to avoid the slow upload from my network to the cloud.
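
For reference, the plain save-and-copy route I'm trying to improve on looks roughly like this (image name and host name are placeholders), but it ships every layer, unchanged ones included:

# Export the full image as a tar stream, compress it, and load it on the remote Docker host.
docker save myapp:latest | gzip | ssh staging 'gunzip | docker load'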

Investigating the contents of a saved tar file, it should not be difficult from a technical point of view. The push command basically does just that: it never uploads layers that are already present in the repo.

Q2: Do you think that running a small repo on the docker-host that I'm deploying to in order to achieve this is a good approach?
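
Concretely, I imagine something like the following (the host name, image name, registry:2 image and SSH tunnel are all just assumptions about how it could be wired up):

# On the deployment host: run a small private registry container on port 5000.
ssh staging 'docker run -d -p 5000:5000 --restart=always --name registry registry:2'

# On the build server: tunnel the registry port so Docker talks to it as localhost
# (plain HTTP is accepted for localhost registries), then tag and push.
# docker push only uploads layers the registry does not already have.
ssh -N -f -L 5000:localhost:5000 staging
docker tag myapp:latest localhost:5000/myapp:latest
docker push localhost:5000/myapp:latest

# On the deployment host: pull the image from the local registry.
ssh staging 'docker pull localhost:5000/myapp:latest'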


1 Answer


If your code can live on GitHub or Bitbucket, why not just use Docker Hub automated builds for free? That way, on your node you just have to docker pull user/image. The GitHub repository and the Docker Hub automated build can both be private, so you don't have to expose your code to the world, although you may have to pay for more than one private repository or build.
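
For a private automated build that is just a login followed by a pull, for example (the image name is a placeholder):

docker login
docker pull user/image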

If you do still want to build your own images, then when you run the build command you will see output similar to the following:

Step 0 : FROM ubuntu
 ---> c4ff7513909d
Step 1 : MAINTAINER Maluuba Infrastructure Team <infrastructure@maluuba.com>
 ---> Using cache
 ---> 858ff007971a
Step 2 : EXPOSE 8080
 ---> Using cache
 ---> 493b76d124c0
Step 3 : RUN apt-get -qq update
 ---> Using cache
 ---> e66c5ff65137

Each of the hashes, e.g. ---> c4ff7513909d, is an intermediate layer. You can find a folder named with that hash under /var/lib/docker/graph, for example:

ls /var/lib/docker/graph | grep c4ff7513909d
c4ff7513909dedf4ddf3a450aea68cd817c42e698ebccf54755973576525c416

As long as you copy all the intermediate layers to your deployment server, you won't need an external Docker repository. If you change only one of the intermediate layers, you only need to re-copy that one for a redeployment. Notice that each step listed in the Dockerfile leads to an intermediate layer, so as long as you only change the last line in the Dockerfile, you will only need to upload one layer. Therefore I would recommend putting your ADD line at the end of your Dockerfile:

ADD MyGeneratedCode /var/my_generated_code
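
A rough sketch of that ordering (the base image tag, paths and start command below are assumptions, not taken from your setup):

# Rarely changing layers first, so they stay cached and never need to be re-uploaded.
FROM node:0.10
MAINTAINER you <you@example.com>
EXPOSE 8080
RUN apt-get -qq update
# The generated application code changes on every build, so it goes last:
# only this final layer differs between deployments.
ADD MyGeneratedCode /var/my_generated_code
CMD ["node", "/var/my_generated_code/server.js"]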