
I've been looking into Docker, and I understand from this post that running multiple Docker containers is meant to be fast because they share kernel-level resources through the "LXC host". However, I haven't found any documentation specific to Docker about how this relationship works, or at what level resources are shared.

How are the Docker image and the Docker container involved with shared resources, and how exactly are those resources shared?

Edit:

When talking about "the kernel" where resources are shared, which kernel is meant? Is it the kernel of the host OS (the level at which the docker binary lives), or the kernel of the image the container is based on? Won't containers based on different Linux distributions need to run on different kernels?

Edit 2:

One final edit to make my question a little clearer: I'm curious whether Docker really does not run the full OS of the image, as suggested on this page under "How is Docker different than a VM".

The following statement, taken from here, seems to contradict the diagram above:

A container consists of an operating system, user-added files, and meta-data. As we've seen, each container is built from an image.

blankenshipz
    Docker containers all run in the same kernel, vs a VM which runs a kernel per guest. So, in terms of which resources in the kernel are shared... really, that would be *absolutely everything*, except those items which are namespaced away from each other (non-shared mounts, process tree entries, cgroups, etc). – Charles Duffy Oct 05 '14 at 00:25
  • If you want an in-depth understanding, pretty much any LXC introduction will serve. – Charles Duffy Oct 05 '14 at 00:26
  • I've amended my answer to cover your edits as well, hope that makes the host/cont separation clearer – Peter R Oct 09 '14 at 10:42

2 Answers


Strictly speaking, Docker no longer has to use LXC (the userspace tools); it still relies on the same underlying kernel technologies, through its in-house container library, libcontainer. Docker can in fact use various system tools for the abstraction between process and kernel. The kernel need not be different for different distributions, but you cannot run a non-Linux OS. The kernel of the host and the kernel of the containers is one and the same; it supports a form of context awareness (namespaces and cgroups) to separate containers from one another.
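You can see that the host and its containers run the very same kernel by comparing kernel versions. A minimal sketch, assuming Docker is installed and can pull a `centos` image (the container step is guarded so it degrades gracefully):

```shell
# The host's kernel version -- the only kernel on the machine.
uname -r

# A container built from a different distribution reports the *same*
# kernel version, because the image ships userspace only, not a kernel.
if command -v docker >/dev/null 2>&1; then
    docker run --rm centos uname -r
fi
```

If the two versions differed, the container would have to be a VM with its own kernel, which is exactly what it is not.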

Each container does contain a separate OS in every way beyond the kernel. It has its own user-space applications and libraries, and for all intents and purposes it behaves as though it has its own kernel.

Peter R
  • Thanks @peter-r, does the container run all the processes of the image O.S or just those docker determines are needed? e.g Does docker spin up init or just whatever processes it needs to run? – blankenshipz Oct 09 '14 at 16:17
  • @blankenshipz Well, there are differences in how everything is set up, though I'm not sure of the details — basically some setup is required to get things working properly, which is why you can't just drop any Linux image into a directory and run it. The container must be configured on the host side too; `lxc-create` handles all of this, or whatever Docker uses to create things under `libcontainer`. – Peter R Oct 09 '14 at 19:06
  • @blankenshipz Each container is started with its own init. In the case of LXC this is started by `lxc-start` on the host. From the host you can see the processes of all containers but from the containers you see only their processes. – Peter R Oct 09 '14 at 19:10
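The point in the comments about each container getting its own `init` can be checked from an ordinary shell: every process advertises its PID namespace under `/proc/<pid>/ns/pid`, and processes in the same namespace point at the same object. A minimal sketch, assuming a Linux host with `/proc` mounted:

```shell
# Two ordinary processes on the host share one PID namespace:
# both symlinks resolve to the same "pid:[...]" inode.
readlink /proc/self/ns/pid
readlink /proc/$$/ns/pid

# A containerized process would show a different pid:[...] inode here,
# and would see itself as PID 1 inside its own namespace.
```

The host can list every container's processes in its own `ps` output precisely because it can see into all of those child namespaces, while each container can only see its own.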

It's not so much a question of which resources are shared as which resources aren't shared. LXC works by setting up namespaces with restricted visibility -- into the process table, into the mount table, into network resources, etc -- but anything that isn't explicitly restricted and namespaced is shared.
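The full set of things that *can* be namespaced away is visible under `/proc/self/ns`; anything not on that list is shared outright. A quick way to enumerate the namespace types on any modern Linux host:

```shell
# Each entry is one namespace type the kernel can isolate per container:
# mnt (mounts), pid (process tree), net (network stack), uts (hostname),
# ipc (SysV IPC), user (UID/GID mappings), cgroup, ...
ls -l /proc/self/ns
```

Everything else — the scheduler, the page cache, the device drivers — has no entry there because it is simply the one shared kernel.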

This means, of course, that the backends for all these components are also shared — you don't need to pretend to have a different set of page tables per guest, because you aren't pretending to run more than one kernel; it's all the same kernel, the same memory allocation pools, the same hardware devices doing the bit-twiddling (versus all the overhead of emulating hardware for a VM, with each guest separately twiddling its virtual devices), the same block caches, and so on.

Frankly, the question is almost too broad to be answered, as the only real answer as to what is shared is "almost everything", and to how it's shared is "by not doing duplicate work in the first place" (as conventional VMs do by emulating hardware rather than sharing just one kernel interacting with the real hardware). This is also why kernel exploits are so dangerous in LXC-based systems -- it's all one kernel, so there's no nontrivial distinction between ring 0 in one container and ring 0 in another.

Charles Duffy