114

How to control host from docker container?

For example, how to execute a bash script that has been copied to the host?

Alex Ushakov
  • 1,299
  • 2
  • 8
  • 7
  • 9
    wouldn't that be exactly the opposite of isolating host from docker? – Marcus Müller Aug 23 '15 at 06:52
  • 46
    Yes. But it's sometimes necessary. – Alex Ushakov Aug 23 '15 at 08:17
  • possible duplicate of [Execute host commands from within a docker container](http://stackoverflow.com/questions/31720935/execute-host-commands-from-within-a-docker-container) – Marcus Müller Aug 23 '15 at 08:50
  • Not sure about "control host" but I was recently at a talk by data scientists who are using docker to run scripts to process huge workloads (using AWS mounted GPUs) and output the result to the host. A very interesting use case. Essentially scripts packaged with a reliable execution environment thanks to docker – KCD Jun 20 '16 at 01:59
  • @KCD And why do they prefer app-containerization via docker instead of using system-level containers (LXC)? – Alex Ushakov Jun 23 '16 at 14:33
  • @AlexUshakov I presume when spinning up X nodes for N hours then destroying them the benefits are in the orchestration of the environment to ensure it is identical to dev (except the size of the input data). It solves dependency hell ... but I cannot comment on LXC. I understand they often dedicate the entire machine/VM (and GPU) to one container which performs comparably to running on the bare VM. I'm no data scientist but I found these examples https://github.com/saiprashanths/dl-docker or http://www.emergingstack.com/2016/01/10/Nvidia-GPU-plus-CoreOS-plus-Docker-plus-TensorFlow.html – KCD Jun 23 '16 at 22:23
  • Maybe I'm trying to do it the wrong way, but here's what I'd like to achieve: 1. there's a "docker package", on some repo, that contains a folder with `docker-compose.yml` and a few other files 2. I git-clone this repo, cd into its directory and fire `docker-compose up` 3. as a result I get: - A web-server with nginx/php-fpm/mysql stuff - A working directory with the project code on my *host* system - … which is also mounted to some folder on the webserver. I believe that getting the project code implies running a few commands on the host from within the Dockerfile? – pilat May 16 '17 at 11:14

12 Answers

70

The solution I use is to connect to the host over SSH and execute the command like this:

ssh -l ${USERNAME} ${HOSTNAME} "${SCRIPT}"

UPDATE

As this answer keeps getting upvotes, I would like to remind (and highly recommend) that the account used to invoke the script should have no permissions at all, except for executing that script via sudo (that can be done from the sudoers file).
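
For example (the user name and script path below are just placeholders), the sudoers entry on the host could look like this, and the container would then invoke only that one script through sudo:

# on the host, edit sudoers with visudo and allow exactly one command, nothing else
scriptuser ALL=(root) NOPASSWD: /usr/local/bin/myscript.sh

# from inside the container
ssh -l scriptuser ${HOSTNAME} "sudo /usr/local/bin/myscript.sh"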

Mohammed Noureldin
  • 9,423
  • 12
  • 58
  • 77
68

Use a named pipe. On the host OS, create a script that loops and reads commands, and then call eval on them.

Have the docker container read from that named pipe.

To be able to access the pipe, you need to mount it via a volume.

This is similar to the SSH mechanism (or a similar socket-based method), but it restricts you properly to the host device, which is probably better. Plus you don't have to pass around authentication information.

My only warning is to be cautious about why you are doing this. It's totally something to do if you want to create a method to self-upgrade with user input or whatever, but you probably don't want to call a command to get some config data, as the proper way would be to pass that in as args/a volume into docker. Also be cautious about the fact that you are eval-ing, so just give the permission model a thought.

Some of the other answers, such as running a script under a volume, won't work generically since they won't have access to the full system resources, but they might be more appropriate depending on your usage.
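
A rough sketch of the idea (paths and the image name here are only examples; see the comments and the more detailed answer below for a full walkthrough):

# on the host: create the pipe and loop forever, eval-ing whatever arrives
mkfifo /path/on/host/hostpipe
while true; do eval "$(cat /path/on/host/hostpipe)"; done &

# start the container with the pipe's folder mounted as a volume
docker run -v /path/on/host:/hostpipe my-image

# inside the container: write a command into the pipe; the host executes it
echo "touch /tmp/created_by_the_host" > /hostpipe/hostpipe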

  • 24
    ATTENTION: This is the right/best answer, and it needs a little more praise. Every other answer is fiddling with asking "what you're trying to do" and making exceptions for stuff. I have a very specific use-case that requires me to be able to do this, and this is the only good answer imho. SSH above would require lowering security/firewall standards, and the docker run stuff is just flat out wrong. Thanks for this. I assume this doesn't get as many upvotes because it's not a simple copy/paste answer, but this is the answer. +100 points from me if I could – Farley Nov 01 '18 at 08:19
  • 4
    For those looking for some more info, you can use the following script running on the host machine: https://unix.stackexchange.com/a/369465 Of course, you'll have to run it with 'nohup' and create some kind of supervisor wrapper in order to maintain it alive (maybe use a cron job :P) – sucotronic Jan 31 '19 at 12:42
  • I created a diagram to illustrate a use case: https://imgur.com/a/9Wkxqu9 – sucotronic Jan 31 '19 at 12:58
  • 8
    This might be a good answer. However, it would be much better if you give more details and some more command line explanation. Is it possible to elaborate? – Mohammed Noureldin Feb 21 '19 at 09:33
  • 5
    Upvoted, This works! Make a named pipe using 'mkfifo host_executor_queue' where the volume is mounted. Then to add a consumer which executes commands that are put into the queue as host's shell, use 'tail -f host_executor_queue | sh &'. The & at the end makes it run in the background. Finally to push commands into the queue use 'echo touch foo > host_executor_queue' - this test creates a temp file foo at home directory. If you want the consumer to start at system startup, put '@reboot tail -f host_executor_queue | sh &' in crontab. Just add relative path to host_executor_queue. – skybunk May 01 '19 at 15:00
  • as a followup, my consumer on host machine kept dying for some reason. Just added nohup to the command. Its '@reboot nohup tail -f host_executor_queue | sh &' that keeps it running. see (https://unix.stackexchange.com/a/32580/350867) – skybunk May 01 '19 at 15:33
  • This would be a good answer if it gave an example of how to do it. It's just a description with no links to any relevant content. Not a very good answer, but just a nudge in the right direction. – TetraDev Dec 31 '19 at 17:43
  • Read the pipe and eval: https://github.com/BradfordMedeiros/automate_firmware/blob/96c72090034c22c4ce78807d38c9c698a25206f6/0_x_install_automate/read_pipe.sh Write to the pipe: https://github.com/BradfordMedeiros/automate_firmware/blob/96c72090034c22c4ce78807d38c9c698a25206f6/0_x_install_automate/write_pipe.sh If you dig around you can see building the docker image/running it with the args, etc. – Bradford Medeiros Aug 10 '20 at 21:59
  • @LucasPottersky this should basically be done by the person who posted the answer. – Mohammed Noureldin Oct 05 '20 at 18:47
30

That REALLY depends on what you need that bash script to do!

For example, if the bash script just echoes some output, you could just do

docker run --rm -v $(pwd)/mybashscript.sh:/mybashscript.sh ubuntu bash /mybashscript.sh

Another possibility is that you want the bash script to install some software, say a script that installs docker-compose. You could do something like

docker run --rm -v /usr/bin:/usr/bin --privileged -v $(pwd)/mybashscript.sh:/mybashscript.sh ubuntu bash /mybashscript.sh

But at this point you're really getting into having to know intimately what the script is doing to allow the specific permissions it needs on your host from inside the container.

Paul Becotte
  • 7,950
  • 2
  • 24
  • 39
  • 1
    I had the idea to make a container that connects to the host and creates new containers. – Alex Ushakov Aug 23 '15 at 14:08
  • 1
    Docker doesn't seem to like your relative mount. This should work `docker run --rm -v $(pwd)/mybashscript.sh:/work/mybashscript.sh ubuntu /work/mybashscript.sh` – KCD Jun 20 '16 at 03:09
  • Thanks for the catch, I fixed the post! – Paul Becotte Jun 20 '16 at 14:49
  • 5
    The first line starts a new ubuntu container, and mounts the script where it can be read. It does not allow the container access to the host filesystem, for instance. The second line exposes the host's `/usr/bin` to the container. In neither case does the container have full access to the host system. Maybe I'm wrong, but it seems like a bad answer to a bad question. – Paul Aug 03 '17 at 03:08
  • 3
    Fair enough- the question was pretty vague. The question didn't ask for "full access to the host system". As described, if the bash script is only intended to echo some output, it wouldn't NEED any access to the host filesystem. For my second example, which was installing docker-compose, the only permission you need is access to the bin directory where the binary gets stored. As I said in the beginning- to do this you would have to have very specific ideas about what the script is doing to allow the right permissions. – Paul Becotte Aug 04 '17 at 13:23
  • 1
    The question as I interpreted it is that it is intended for the host to run the script, not the container. So docker run is not the answer. Something like allowing the container to ssh to the host and run the script is the answer. I didn't even notice that @MohammedNoureldin has the right answer and is almost voted over the accepted answer.. I will help him do that. – parity3 Jan 24 '18 at 19:28
  • But... A docker container is not a VM. Everything it does is on the host system running in the host kernel. Depending on the flags a container can run any process and modify any part of the host system. – Paul Becotte Jan 24 '18 at 19:57
  • 1
    I know its an old question, but for example. If `mybashscript.sh` echoes let's say the MAC address (something hardware specific), even though it gets invoked in the container, would the output be the same as if I were to run the script directly in a terminal on the host machine? Or does this method just give me access to the script, and the output would be exactly as if I had run the script in the container? – Jabari Dash Feb 06 '18 at 11:39
  • MAC address depends on the network settings. By default, each container gets its own ip address and mac address. However, using --network=host would connect the container to the host's network with no network isolation, and that command would output the same in the container as on the host. – Paul Becotte Feb 06 '18 at 16:28
  • is it possible to run a shell on the host from a container if the container has access to the host docker.sock? – Zibri Dec 13 '18 at 14:01
  • @AlexUshakov I know this question is quite old but I have the same use case, I came across http://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/ which suggested sharing the docker socket with your container. like this `docker run -v /var/run/docker.sock:/var/run/docker.sock ...` – Matt Bucci Mar 05 '19 at 19:53
  • A container can control the host docker daemon (and launch sibling containers) so long as it has access to the docker socket, has a docker client installed, and has the 'privileged' flag. Remember that this basically gives the container root access to the host. You can also run a new instance of containerd inside the container, and there has been some progress on running docker without root permissions. – Paul Becotte Mar 06 '19 at 20:28
  • 3
    Tried this, the script is executed in container, not on host – All2Pie Sep 23 '20 at 07:16
  • Yes- but the container isn't a separate thing. It is a process running on the host in a chroot and with a permissions namespace. When you do 'docker run' it launches a process and sets up permissions on what files it can see and things it can do. Its not the default, but you can give the process full root permission on the host as well as mount the host filesystem inside the container filesystem. You shouldn't, but you can. So if you know exactly what the script needs to do, you can setup your container to have all the needed permissions. – Paul Becotte Sep 25 '20 at 14:42
30

This answer is just a more detailed version of Bradford Medeiros's solution, which for me as well turned out to be the best answer, so the credit goes to him.

In his answer, he explains WHAT to do (named pipes) but not exactly HOW to do it.

I have to admit I didn't know what named pipes were at the time I read his solution. So I struggled to implement it (while it's actually really simple), but I did succeed, so I'm happy to help by explaining how I did it. The point of my answer is just to detail the commands you need to run in order to get it working, but again, credit goes to him.

PART 1 - Testing the named pipe concept without docker

On the main host, choose the folder where you want to put your named pipe file, for instance /path/to/pipe/, and a pipe name, for instance mypipe, and then run:

mkfifo /path/to/pipe/mypipe

The pipe is created. Type

ls -l /path/to/pipe/mypipe 

And check that the access rights start with "p", such as

prw-r--r-- 1 root root 0 mypipe

Now run:

tail -f /path/to/pipe/mypipe

The terminal is now waiting for data to be sent into this pipe

Now open another terminal window.

And then run:

echo "hello world" > /path/to/pipe/mypipe

Check the first terminal (the one with tail -f); it should display "hello world".

PART 2 - Run commands through the pipe

On the host, instead of running tail -f, which just outputs whatever is sent as input, run this command, which will evaluate the input as commands:

eval "$(cat /path/to/pipe/mypipe)"

Then, from the other terminal, try running:

echo "ls -l" > /path/to/pipe/mypipe

Go back to the first terminal and you should see the result of the ls -l command.

PART 3 - Make it listen forever

You may have noticed that in the previous part, right after ls -l output is displayed, it stops listening for commands.

Instead of eval "$(cat /path/to/pipe/mypipe)", run:

while true; do eval "$(cat /path/to/pipe/mypipe)"; done

(you can nohup that)

Now you can send an unlimited number of commands one after the other; they will all be executed, not just the first one.

PART 4 - Make it work even when reboot happens

The only caveat is that if the host has to reboot, the "while" loop will stop working.

To handle reboots, here is what I've done:

Put the while true; do eval "$(cat /path/to/pipe/mypipe)"; done loop in a file called execpipe.sh with a #!/bin/bash header
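
So execpipe.sh ends up containing just:

#!/bin/bash
# loop forever, executing whatever gets written into the named pipe
while true; do eval "$(cat /path/to/pipe/mypipe)"; done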

Don't forget to chmod +x it

Add it to crontab by running

crontab -e

And then adding

@reboot /path/to/execpipe.sh

At this point, test it: reboot your server, and when it's back up, echo some commands into the pipe and check if they are executed. Of course, you aren't able to see the output of commands, so ls -l won't help, but touch somefile will help.

Another option is to modify the script to put the output in a file, such as:

while true; do eval "$(cat /path/to/pipe/mypipe)" &> /somepath/output.txt; done

Now you can run ls -l and the output (both stdout and stderr using &> in bash) should be in output.txt.

PART 5 - Make it work with docker

If you are using both docker compose and dockerfile like I do, here is what I've done:

Let's assume you want to mount the mypipe's parent folder as /hostpipe in your container

Add this:

VOLUME /hostpipe

in your dockerfile in order to create a mount point

Then add this:

volumes:
   - /path/to/pipe:/hostpipe

in your docker compose file in order to mount /path/to/pipe as /hostpipe

Restart your docker containers.

PART 6 - Testing

Exec into your docker container:

docker exec -it <container> bash

Go into the mount folder and check you can see the pipe:

cd /hostpipe && ls -l

Now try running a command from within the container:

echo "touch this_file_was_created_on_main_host_from_a_container.txt" > /hostpipe/mypipe

And it should work!

WARNING: If you have an OSX (Mac OS) host and a Linux container, it won't work (explanation here https://stackoverflow.com/a/43474708/10018801 and issue here https://github.com/docker/for-mac/issues/483 ) because the pipe implementation is not the same: what you write into the pipe from Linux can be read only by Linux, and what you write into the pipe from Mac OS can be read only by Mac OS (this sentence might not be very accurate, but just be aware that a cross-platform issue exists).

For instance, when I run my docker setup in DEV from my Mac OS computer, the named pipe as explained above does not work. But in staging and production, I have Linux host and Linux containers, and it works perfectly.

PART 7 - Example from Node.JS container

Here is how I send a command from my node js container to the main host and retrieve the output:

const fs = require("fs")

const pipePath = "/hostpipe/mypipe"
const outputPath = "/hostpipe/output.txt"
const commandToRun = "pwd && ls -l"

console.log("delete previous output")
if (fs.existsSync(outputPath)) fs.unlinkSync(outputPath)

console.log("writing to pipe...")
const wstream = fs.createWriteStream(pipePath)
wstream.write(commandToRun)
wstream.close()

console.log("waiting for output.txt...") //there are better ways to do that than setInterval
let timeout = 10000 //stop waiting after 10 seconds (something might be wrong)
const timeoutStart = Date.now()
const myLoop = setInterval(function () {
    if (Date.now() - timeoutStart > timeout) {
        clearInterval(myLoop);
        console.log("timed out")
    } else {
        //if output.txt exists, read it
        if (fs.existsSync(outputPath)) {
            clearInterval(myLoop);
            const data = fs.readFileSync(outputPath).toString()
            if (fs.existsSync(outputPath)) fs.unlinkSync(outputPath) //delete the output file
            console.log(data) //log the output of the command
        }
    }
}, 300);
Dharman
  • 21,838
  • 18
  • 57
  • 107
Vincent
  • 1,762
  • 8
  • 15
  • 1
    This works nicely. What about security? I want to use this to start/stop docker containers from within a running container? Do I just make a dockeruser without any privileges except for running docker commands? – Kristof van Woensel Sep 29 '20 at 21:44
  • @Vincent do you know how to run a command in php? I tried ```shell_exec('echo "mkdir -p /mydir" > /path/mypipe')``` but it's not working. Any idea? – JanuszO Mar 03 '21 at 19:19
  • of course the command works in a container, but not from php code – JanuszO Mar 03 '21 at 19:20
8

My laziness led me to find the easiest solution that wasn't published as an answer here.

It is based on the great article by luc juggery.

All you need to do in order to gain a full shell on your Linux host from within your docker container is:

docker run --privileged --pid=host -it alpine:3.8 \
nsenter -t 1 -m -u -n -i sh

Explanation:

--privileged: grants additional permissions to the container; it allows the container to gain access to the devices of the host (/dev)

--pid=host: allows the container to use the process tree of the Docker host (the VM in which the Docker daemon is running)

nsenter utility: allows running a process in existing namespaces (the building blocks that provide isolation to containers)

nsenter (-t 1 -m -u -n -i sh) allows running the process sh in the same isolation context as the process with PID 1. The whole command will then provide an interactive sh shell in the VM
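
For instance, a non-interactive variant of the same idea (the commands after -c are just an example) would be:

docker run --rm --privileged --pid=host alpine:3.8 \
nsenter -t 1 -m -u -n -i sh -c 'hostname; uname -a'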

This setup has major security implications and should be used with caution (if at all).

  • By far the best and easiest solution! thank you Shmulik for providing it (Yashar Koach!) – MMEL May 25 '21 at 07:19
6

Write a simple Python server that listens on a port (say 8080), bind the port to the container with -p 8080:8080, and make an HTTP request to localhost:8080 to ask the Python server to run shell scripts with popen. You can run curl or write code to make the HTTP request, e.g. curl -d '{"foo":"bar"}' localhost:8080

#!/usr/bin/python
from BaseHTTPServer import BaseHTTPRequestHandler,HTTPServer
import subprocess
import json

PORT_NUMBER = 8080

# This class handles any incoming request from
# the browser
class myHandler(BaseHTTPRequestHandler):
        def do_POST(self):
                content_len = int(self.headers.getheader('content-length'))
                post_body = self.rfile.read(content_len)
                self.send_response(200)
                self.end_headers()
                data = json.loads(post_body)

                # Use the post data
                cmd = "your shell cmd"
                p = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True)
                p_status = p.wait()
                (output, err) = p.communicate()
                print "Command output : ", output
                print "Command exit status/return code : ", p_status

                self.wfile.write(cmd + "\n")
                return
try:
        # Create a web server and define the handler to manage the
        # incoming request
        server = HTTPServer(('', PORT_NUMBER), myHandler)
        print 'Started httpserver on port ' , PORT_NUMBER

        # Wait forever for incoming http requests
        server.serve_forever()

except KeyboardInterrupt:
        print '^C received, shutting down the web server'
        server.socket.close()
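
A rough usage sketch, assuming (as discussed in the comments) that the script above is saved as server.py and started on the host; the address that resolves to the host from inside the container depends on your setup (host.docker.internal works on Docker Desktop, on Linux it is typically the docker bridge IP such as 172.17.0.1):

# on the host (the script above is Python 2)
python server.py

# from inside the container
curl -d '{"foo":"bar"}' http://host.docker.internal:8080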
cmbarbu
  • 3,878
  • 22
  • 43
Frank Chang
  • 149
  • 2
  • 5
  • IMO this is the best answer. Running arbitrary commands on the host machine MUST be done through some kind of API (e.g. REST). This is the only way that security can be enforced and running processes can be properly controlled (e.g. killing, handling stdin, stdout, exit-code, and so on). Of course it would be pretty if this API could run inside Docker, but personally I don't mind running it on the host directly. – barney765 Oct 28 '20 at 12:35
  • Please correct me if I'm wrong, but `subprocess.Popen` will *run* the script in the container, not on the host, right? (Regardless if the script's source is on the host or in the container.) – Arjan Jan 16 '21 at 13:03
  • 1
    @Arjan, if you run the above script inside a container, `Popen` will execute the command in the container as well. However, if you run the above script from the host, `Popen` will execute the command on the host. – barney765 Jan 24 '21 at 07:48
  • Thanks, @barney765. Running on the host to provide an API makes sense, like does your first comment. I guess (for me) the _"bind the port -p 8080:8080 with the container"_ is the confusing part. I assumed the `-p 8080:8080` was supposed to be part of the `docker` command, _publishing_ that API's port from the container, making me think it was supposed to be running in the container (and that `subprocess.Popen` was supposed to do the magic to run things on the host, from the container). (Future readers, see [How to access host port from docker container](https://stackoverflow.com/a/43541732).) – Arjan Jan 24 '21 at 08:23
6

If you are not worried about security and you're simply looking to start a docker container on the host from within another docker container like the OP, you can share the docker server running on the host with the docker container by sharing its listen socket.

Please see https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface and see if your personal risk tolerance allows this for this particular application.

You can do this by adding the following volume args to your start command

docker run -v /var/run/docker.sock:/var/run/docker.sock ...

or by sharing /var/run/docker.sock within your docker compose file like this:

version: '3'

services:
   ci:
      command: ...
      image: ...
      volumes:
         - /var/run/docker.sock:/var/run/docker.sock

When you run the docker start command within your docker container, the docker server running on your host will see the request and provision the sibling container.
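
For example, once the socket is mounted and a docker client is available in the container (see the comments below), something like this run from inside the container will create a sibling container on the host's daemon (image and command are just an example):

docker run --rm alpine:3.7 echo "hello from a sibling container"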

credit: http://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/

Matt Bucci
  • 2,005
  • 1
  • 14
  • 22
  • 1
    Consider that docker must be installed in the container, otherwise you will also need to mount a volume for the docker binary (e.g. `/usr/bin/docker:/usr/bin/docker`). – Gerry Dec 13 '19 at 00:37
  • 1
    Please be carefull when mounting the docker socket in your container, this could be a serious security issue: https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface – DatGuyKaj Jan 29 '20 at 10:45
  • @DatGuyKaj thanks, I've edited my answer to reflect the issues outlined by your resource. – Matt Bucci Jan 29 '20 at 19:49
  • This does not answer the question, which is about running a script on the host, not in a container – Brandon Oct 09 '20 at 16:03
1
# on the host: start a busybox container with the host's root filesystem mounted at /mnt/rootdir
docker run --detach-keys="ctrl-p" -it -v /:/mnt/rootdir --name testing busybox
# then, at the busybox shell inside the container, chroot into the host's filesystem:
# chroot /mnt/rootdir
# 
Zibri
  • 7,056
  • 2
  • 42
  • 38
  • 3
    While this answer might resolve the OP's question, it is suggested that you explain how it works and why it resolves the issue. This helps new developers understand what is going on and how to fix this and similar issues themselves. Thanks for contributing! – Caleb Kleveter Dec 14 '18 at 18:47
1

I have a simple approach.

Step 1: Mount /var/run/docker.sock:/var/run/docker.sock (so you will be able to execute docker commands inside your container)

Step 2: Execute the command below inside your container. The key part here is --network host, as this will execute from the host context.

docker run -i --rm --network host -v /opt/test.sh:/test.sh alpine:3.7 sh /test.sh

test.sh should contain some commands (ifconfig, netstat, etc.), whatever you need. Now you will be able to get host context output.
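
A minimal test.sh could be something like this (purely illustrative; as noted in the comment below, only the network namespace is shared this way):

#!/bin/sh
# with --network host these report the host's network stack, not the container's
ifconfig
netstat -tuln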

Bala
  • 267
  • 1
  • 6
  • 15
  • 3
    According to docker official documentation on networking using host network, "However, in all other ways, such as storage, process namespace, and user namespace, the process is isolated from the host." Check out - https://docs.docker.com/network/network-tutorial-host/ – Peter Mutisya Jan 13 '20 at 04:15
0

As Marcus reminds us, docker is basically process isolation. Starting with docker 1.8, you can copy files both ways between the host and the container; see the docs for docker cp

https://docs.docker.com/reference/commandline/cp/

Once a file is copied, you can run it locally
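
For example (container and file names are placeholders):

# on the host: copy the script out of the container, then run it there
docker cp mycontainer:/tmp/myscript.sh ./myscript.sh
chmod +x ./myscript.sh
./myscript.sh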

user2915097
  • 24,082
  • 5
  • 47
  • 53
  • 1
    I know it. How to run this script, in other words, from inside the docker container? – Alex Ushakov Aug 23 '15 at 08:16
  • 1
    duplicate of http://stackoverflow.com/questions/31720935/execute-host-commands-from-within-a-docker-container/31721604#31721604 ? – user2915097 Aug 23 '15 at 08:37
  • 2
    @AlexUshakov: no way. Doing that would break a lot of the advantages of docker. Don't do it. Don't try it. Reconsider what you need to do. – Marcus Müller Aug 23 '15 at 08:50
  • See also Vlad's trick https://forums.docker.com/t/will-docker-cp-command-work-for-copying-files-from-host-to-a-container/2022 – user2915097 Aug 23 '15 at 09:01
  • 1
    you can always, on the host, get the value of some variable in your container, something like `myvalue=$(docker run -it ubuntu echo $PATH)` and test it regularly in a shell script (of course, you will use something else than $PATH, it is just an example), and when it is some specific value, you launch your script – user2915097 Aug 23 '15 at 17:24
  • @MarcusMüller It is absolutely possible and there are plenty of good reasons for doing it; build jobs and tests that leverage the normalised environment of a Docker container are a great example and a common use case (especially in complex builds for cross platform software and/or running integration tests on a system with a complex stack). – Iain Collins Jun 13 '20 at 17:01
0

You can use the pipe concept, but use a file on the host and fswatch to accomplish the goal of executing a script on the host machine from a docker container. Like so (use at your own risk):

#! /bin/bash

touch .command_pipe
chmod +x .command_pipe

# Use fswatch to execute a command on the host machine and log result
fswatch -o --event Updated .command_pipe | \
            xargs -n1 -I "{}"  .command_pipe >> .command_pipe_log  &

 docker run -it --rm  \
   --name alpine  \
   -w /home/test \
   -v $PWD/.command_pipe:/dev/command_pipe \
   alpine:3.7 sh

rm -rf .command_pipe
kill %1

In this example, inside the container send commands to /dev/command_pipe, like so:

/home/test # echo 'docker network create test2.network.com' > /dev/command_pipe

On the host, you can check if the network was created:

$ docker network ls | grep test2
8e029ec83afe        test2.network.com                            bridge              local
-7

To expand on user2915097's response:

The idea of isolation is to be able to restrict what an application/process/container (whatever your angle at this is) can do to the host system very clearly. Hence, being able to copy and execute a file would really break the whole concept.

"Yes. But it's sometimes necessary."

No. That's not the case, or Docker is not the right thing to use. What you should do is declare a clear interface for what you want to do (e.g. updating a host config), and write a minimal client/server to do exactly that and nothing more. Generally, however, this doesn't seem to be very desirable. In many cases, you should simply rethink your approach and eradicate that need. Docker came into existence when basically everything was a service that was reachable using some protocol. I can't think of any proper use case of a Docker container getting the rights to execute arbitrary stuff on the host.

Community
  • 1
  • 1
Marcus Müller
  • 27,924
  • 4
  • 40
  • 79
  • I have a use case: I have dockerized service `A` (src on github). In the `A` repo I create proper hooks which, after a 'git pull' command, create a new docker image and run it (and remove the old container of course). Next: github has web-hooks which allow a POST request to be made to an arbitrary endpoint after a push to master. So I want to create dockerized service B which will be that endpoint and which will only run 'git pull' in repo A on the HOST machine (important: the command 'git pull' must be executed in the HOST environment - not in the B environment, because B cannot run a new container A inside B...) – Kamil Kiełczewski May 17 '17 at 19:43
  • 1
    The problem: I want to have nothing on the HOST except linux, git and docker. And I want to have dockerized service A and service B (which is in fact a git-push handler which executes git pull on repo A after someone does a git push to master). So git auto-deploy is a problematic use-case – Kamil Kiełczewski May 17 '17 at 19:47
  • @KamilKiełczewski I'm trying to do exactly the same, have you found a solution? – user871784 Oct 23 '17 at 02:21
  • @user871784 - yes :) Look at this [project](https://github.com/kamil-kielczewski/bash-via-http) - study it and you will find the solution. (At the time I created this project, the fswatch tool didn't exist on ubuntu, so I used inotify-tools - however I have since heard that this tool now exists, so you can simplify this solution a little) – Kamil Kiełczewski Oct 23 '17 at 09:19
  • 1
    Saying, "No, that's not the case" is narrow minded and assumeds you know every use-case in the world. Our use case is running tests. They need to run in containers to correctly test the environment, but given the nature of tests, they also need to execute scripts on the host. – Senica Gonzalez Feb 01 '18 at 18:36
  • well, I don't know your use case, or a lot of use cases. I do know that containerization is a servicing or isolation approach, and that breaking that isolation is *usually* a bad idea, as it introduces exactly the kind of host dependency that you'd want to avoid when doing things in containers instead of outside. – Marcus Müller Feb 01 '18 at 19:41
  • @MarcusMüller I have a situation where the programs I need are **only** available to me if I run them in Docker. When they run, they add to a database on the host, which then has to be re-indexed. If the program were running on the host, the re-index command could be issued directly when the DB update is done. It seems I can either run that command on the host at scheduled times; or continuously scan the DB for changes; but it seems it would be more efficient for me to run the command when needed. `ssh` does not seem to be available within my docker environment. What would you suggest? – Ron Rosenfeld Feb 23 '18 at 13:35
  • 2
    **Just for those wondering why I leave a -7 answer up:** a) it's OK to be fallible. I was wrong. It's OK that this is documented here. b) The comments actually contribute value; deleting the answer would delete them, too. c) It still contributes a point of view that might be wise to consider (don't break your isolation if you don't have to. Sometimes you have to, though). – Marcus Müller Jun 21 '20 at 08:52