434

I'm trying to create a Docker container that acts like a full-on virtual machine. I know I can use the EXPOSE instruction inside a Dockerfile to expose a port, and I can use the -p flag with docker run to assign ports, but once a container is actually running, is there a command to open/map additional ports live?

For example, let's say I have a Docker container that is running sshd. Someone else using the container ssh's in and installs httpd. Is there a way to expose port 80 on the container and map it to port 8080 on the host, so that people can visit the web server running in the container, without restarting it?

Peter Mortensen
reberhardt
  • You can [give the container a routable IP](http://stackoverflow.com/a/43244221/1318694), so no port mapping is required. – Matt May 03 '17 at 22:04

16 Answers

359

You cannot do this via Docker, but you can access the container's un-exposed port from the host machine.

If you have a container with something running on its port 8000, you can run

wget http://container_ip:8000

To get the container's IP address, run these two commands:

docker ps
docker inspect container_name | grep IPAddress

Internally, Docker shells out to call iptables when you start a container, so some variation on this may work.

To expose the container's port 8000 on your localhost's port 8001:

iptables -t nat -A DOCKER -p tcp --dport 8001 -j DNAT --to-destination 172.17.0.19:8000

One way to work this out is to set up another container with the port mapping you want and compare the output of the iptables-save command (though I had to remove some of the other options that force traffic to go via the Docker proxy), as sketched below.
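
A rough sketch of that comparison approach (the image name and port numbers here are just placeholders):

# snapshot the NAT rules, start a throwaway container with the mapping you want, then diff
iptables-save -t nat > /tmp/nat-before.rules
docker run -d --rm --name ref-mapping -p 8001:8000 busybox sleep 3600
iptables-save -t nat > /tmp/nat-after.rules
diff /tmp/nat-before.rules /tmp/nat-after.rules
docker rm -f ref-mapping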

NOTE: this is subverting Docker, so it should be done with the awareness that it may well create blue smoke.

OR

Another alternative is to look at the (new? post-0.6.6?) -P option, which publishes all exposed ports to random host ports that you can then wire up.
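
For example (nginx is just a stand-in image that EXPOSEs port 80):

docker run -d -P --name web nginx   # publish every EXPOSEd port to a random high host port
docker port web                     # show which host port(s) were assigned, e.g. 0.0.0.0:32768->80/tcp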

OR

With 0.6.5, you could use the links feature to bring up a new container that talks to the existing one, with some additional relaying via that new container's -p flags. (I have not used links yet.)

OR

With Docker 0.11(?), you can use docker run --net host .. to attach your container directly to the host's network interfaces (i.e., the network is not namespaced), so all ports you open in the container are exposed on the host.
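
A minimal sketch of that approach (nginx is only an example service):

docker run --rm --net host nginx
# the container shares the host's network namespace, so nginx listens on the
# host's port 80 directly and no -p mapping is needed at all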

Pang
SvenDowideit
  • This doesn't appear to work with docker 1.3.0 at least. The DOCKER DNAT rule is created when running docker with -p, but adding it manually doesn't seem to allow connections. Oddly deleting the rule while a container is running doesn't seem to stop it from working either... – silasdavis Oct 27 '14 at 17:00
  • Thanks. I was lulled into a sense of security that un-exposed ports were safe. – seanmcl Nov 25 '14 at 21:17
  • Automating things and using jq + sed, it may be useful: `CONTAINER_IP=$(docker inspect container_name | jq .[0].NetworkSettings.IPAddress | sed -r 's/\"([^\"]+)\"/\1/''])'); iptables -t nat -A DOCKER -p tcp --dport 8001 -j DNAT --to-destination ${CONTAINER_IP}:8000` – ericson.cepeda Aug 21 '15 at 02:11
  • @ericson.cepeda Instead of invoking `jq` and `sed` you can use the `-f` option of `docker inspect`: `CONTAINER_IP=$(docker inspect -f '{{ .NetworkSettings.IPAddress }}' container_name)` – pwes Dec 01 '15 at 09:08
  • for those that prefer http://kmkeen.com/jshon/ `jshon -e 0 -e NetworkSettings -e Networks -e bridge -e IPAddress -u` – slf Jun 13 '16 at 16:57
  • I get this error "The term 'iptables' is not recognized as the name of a cmdlet," – Blue Clouds Sep 22 '18 at 08:27
  • For mac check this https://serverfault.com/a/105736/428925 – Vishrant Oct 10 '20 at 23:19
143

Here's what I would do:

  • Commit the live container.
  • Run the container again from the new image, with the ports open (I'd recommend mounting a shared volume and opening the SSH port as well):
sudo docker ps 
sudo docker commit <containerid> <foo/live>
sudo docker run -i -p 22 -p 8000:80 -v /data:/data -t <foo/live> /bin/bash
Felipe Pereira
bosky101
  • The key part of my question is that this needs to happen without restarting the container... Shifting to the new container may keep files, but will effectively kill any running processes, and will be similar to a reboot on a physical machine. I need to do this without that happening. Thank you though! – reberhardt Feb 02 '14 at 03:36
  • alright. i'd compare it more though to starting a parallel instance. since both are now running (old and new) it may be easier to proxy to the new container, once the necessary migrations are done. – bosky101 Feb 22 '14 at 12:11
  • Yes, parallel instances and reverse proxying are some of the top reasons I love Docker. However, in this scenario, I need to preserve all running processes in the container that may have been started via SSH. While the executables will be preserved in committing the image and starting a parallel instance, the executables themselves won't be started and anything in RAM will be lost. – reberhardt Feb 23 '14 at 18:20
  • Why do you run `sudo docker` and not just `docker`? – Thiago Figueiro Mar 09 '16 at 22:55
  • This tip helped me a lot because I installed tons of packages in a slow network. By running `docker commit` I was ready to test the application again instead of spending hours to reinstall everything. – gustavohenke Aug 30 '16 at 18:48
  • I realise this is very old by now, but the asker mentions "processes ... have been started via SSH". If you're able to SSH to your container, you can use SSH Port Forwarding to open up as many ports as you like (without restarting the container) - just log out of SSH and back in, or worst case log in a second time to the container. – Ralph Bolton May 17 '21 at 15:51
61

While you cannot expose a new port of an existing container, you can start a new container in the same Docker network and get it to forward traffic to the original container.

# docker run \
  --rm \
  -p $PORT:1234 \
  verb/socat \
    TCP-LISTEN:1234,fork \
    TCP-CONNECT:$TARGET_CONTAINER_IP:$TARGET_CONTAINER_PORT

Worked Example

Launch a web-service that listens on port 80, but do not expose its internal port 80 (oops!):

# docker run -ti mkodockx/docker-pastebin   # Forgot to expose PORT 80!

Find its Docker network IP:

# docker inspect 63256f72142a | grep IPAddress
                    "IPAddress": "172.17.0.2",

Launch verb/socat with port 8080 exposed, and get it to forward TCP traffic to that IP's port 80:

# docker run --rm -p 8080:1234 verb/socat TCP-LISTEN:1234,fork TCP-CONNECT:172.17.0.2:80

You can now access the pastebin at http://localhost:8080/; your requests go to socat:1234, which forwards them to pastebin:80, and the responses travel the same path in reverse.

RobM
  • Pretty clever! Yet, probably better to use `verb/socat:alpine`, since its image has 5% of the footprint (unless you run into [libc or DNS incompatibilities](https://github.com/gliderlabs/docker-alpine/blob/master/docs/caveats.md)). – jpaugh Jul 10 '17 at 18:48
  • There's also `alpine/socat` – Jamby Oct 28 '17 at 09:23
  • Excellent answer. Simple one-liner that will get it done without any dirty hacks. – Bruno Brant Apr 25 '18 at 14:22
  • This worked perfectly for me, thanks! I had to add `--net myfoldername_default` to my `verb/socat` launch command since I started the un-exposed container in a docker composition which creates a network. – emazzotta Dec 28 '18 at 13:24
37

The iptables hacks don't work, at least not on Docker 1.4.1.

The best way would be to run another container with the exposed port and relay with socat. This is what I've done to (temporarily) connect to the database with SQLPlus:

docker run -d --name sqlplus --link db:db -p 1521:1521 sqlplus

Dockerfile:

FROM debian:7

RUN apt-get update && \
    apt-get -y install socat && \
    apt-get clean

USER nobody

CMD socat -dddd TCP-LISTEN:1521,reuseaddr,fork TCP:db:1521
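
To use it, build the image and then start the relay roughly as in the run command above (the image tag sqlplus is simply the name used in this answer):

docker build -t sqlplus .
docker run -d --name sqlplus --link db:db -p 1521:1521 sqlplus
# host port 1521 is now relayed by socat to port 1521 of the linked "db" container
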
MiniGod
Ricardo Branco
  • The iptables hacks are supposed to run from the host machine, not the docker container. It's essentially forwarding requests to certain ports on the host into the appropriate docker container ports. It's not docker specific at all, and you can do it to completely different hosts. Can you add code formatting to your Docker file? It seems pretty useful. – AusIV Mar 03 '15 at 19:52
  • February 2016: Running Docker 1.9.1, **this was the only solution that worked successfully for me.** None of the IPTables solutions worked. It's worth advising to use the same `FROM` base image as your DB container, for efficient use of resources. – Excalibur Feb 13 '16 at 23:11
  • You may want to give [alpine/socat](https://hub.docker.com/r/alpine/socat/) a try. It comes with socat preinstalled and accepts socat options as its command, so you don't need to write a Dockerfile at all. – trkoch May 30 '18 at 08:56
37

Here's another idea. Use SSH to do the port forwarding; this has the benefit of also working in OS X (and probably Windows) when your Docker host is a VM.

docker exec -it <containerid> ssh -R5432:localhost:5432 <user>@<hostip>
Peter Mortensen
Scott Carlson
8

I had to deal with this same issue and was able to solve it without stopping any of my running containers. This solution is up to date as of February 2016, using Docker 1.9.1. It is a more detailed version of @ricardo-branco's answer, aimed at new users.

In my scenario, I wanted to temporarily connect to MySQL running in a container, and since other application containers are linked to it, stopping, reconfiguring, and re-running the database container was a non-starter.

Since I'd like to access the MySQL database externally (from Sequel Pro via SSH tunneling), I'm going to use port 33306 on the host machine. (Not 3306, just in case there is an outer MySQL instance running.)

About an hour of tweaking iptables proved fruitless.

Step by step, here's what I did:

mkdir db-expose-33306
cd db-expose-33306
vim Dockerfile

Edit the Dockerfile, placing this inside:

# Exposes port 3306 of the linked "db" container, to be accessible at host:33306
# (Recommended: use the same base image as the DB container)
FROM ubuntu:latest

RUN apt-get update && \
    apt-get -y install socat && \
    apt-get clean

USER nobody
EXPOSE 33306

CMD socat -dddd TCP-LISTEN:33306,reuseaddr,fork TCP:db:3306

Then build the image:

docker build -t your-namespace/db-expose-33306 .

Then run it, linking it to your running container. (Use -d instead of --rm to keep it in the background until it is explicitly stopped and removed; I only want it running temporarily in this case.)

docker run -it --rm --name=db-33306 --link the_live_db_container:db -p 33306:33306  your-namespace/db-expose-33306
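
Once the relay is up, a quick sanity check from the host might look like this (the credentials are whatever your MySQL container uses):

mysql -h 127.0.0.1 -P 33306 -u root -p
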
Peter Mortensen
Excalibur
8

To add to the accepted answer's iptables solution, I had to run two more commands on the host to open it to the outside world.

HOST> iptables -t nat -A DOCKER -p tcp --dport 443 -j DNAT --to-destination 172.17.0.2:443
HOST> iptables -t nat -A POSTROUTING -j MASQUERADE -p tcp --source 172.17.0.2 --destination 172.17.0.2 --dport https
HOST> iptables -A DOCKER -j ACCEPT -p tcp --destination 172.17.0.2 --dport https

Note: I was opening port https (443); my Docker-internal IP was 172.17.0.2.

Note 2: These rules are temporary and will only last until the container is restarted.
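
When you are done (or before the container comes back under a different IP), the same rules can be removed by replacing -A with -D, for example:

HOST> iptables -t nat -D DOCKER -p tcp --dport 443 -j DNAT --to-destination 172.17.0.2:443
HOST> iptables -t nat -D POSTROUTING -j MASQUERADE -p tcp --source 172.17.0.2 --destination 172.17.0.2 --dport https
HOST> iptables -D DOCKER -j ACCEPT -p tcp --destination 172.17.0.2 --dport https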

dstj
  • This worked for me... however there were some caveats later. As per note 2, this will stop working if the container is restarted, and it may be on a different IP then. But more importantly, docker has no idea about these iptables entries, so it will not remove them when you later fully restart the service and get docker to do it properly. The result was multiple iptables entries that were exactly the same, which caused it to fail with little to no errors or indication as to the cause. Once the extra rules were removed, the problem disappeared. In other words, look over your iptables rules VERY carefully after any change. – anthony May 21 '20 at 02:22
5

You can use SSH to create a tunnel and expose your container in your host.

You can do it in both ways: from container to host and from host to container. But you need an SSH tool like OpenSSH in both (a client in one and a server in the other).

For example, in the container, you can do

$ yum install -y openssh openssh-server.x86_64
service sshd restart
Stopping sshd:                                             [FAILED]
Generating SSH2 RSA host key:                              [  OK  ]
Generating SSH1 RSA host key:                              [  OK  ]
Generating SSH2 DSA host key:                              [  OK  ]
Starting sshd:                                             [  OK  ]
$ passwd # You need to set a root password..

You can find the container IP address from this line (in the container):

$ ifconfig eth0 | grep "inet addr" | sed 's/^[^:]*:\([^ ]*\).*/\1/g'
172.17.0.2

Then in the host, you can just do:

sudo ssh -NfL 80:0.0.0.0:80 root@172.17.0.2
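
For the other direction (from inside the container out to the host), a remote forward would look roughly like this; the user is a placeholder, and 172.17.0.1 is assumed to be the host's address on the default bridge (the forwarded port binds to the host's loopback unless sshd has GatewayPorts enabled):

ssh -NfR 8080:localhost:80 someuser@172.17.0.1
# the container's port 80 is now reachable on the host at port 8080
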
Peter Mortensen
ton
3

There is a handy HAProxy wrapper.

docker run -it -p LOCALPORT:PROXYPORT --rm --link TARGET_CONTAINER:EZNAME -e "BACKEND_HOST=EZNAME" -e "BACKEND_PORT=PROXYPORT" demandbase/docker-tcp-proxy

This creates an HAProxy proxy to the target container. Easy peasy.
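
A filled-in sketch of that template (the container name web and the ports are placeholders):

docker run -it --rm -p 8080:80 --link web:web -e "BACKEND_HOST=web" -e "BACKEND_PORT=80" demandbase/docker-tcp-proxy
# localhost:8080 on the host now proxies to port 80 of the "web" container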

kwerle
3

In case no answer is working for someone: check whether your target container is already running in a named Docker network:

CONTAINER=my-target-container
docker inspect $CONTAINER | grep NetworkMode
        "NetworkMode": "my-network-name",

Save it for later in the variable $NET_NAME:

NET_NAME=$(docker inspect --format '{{.HostConfig.NetworkMode}}' $CONTAINER)

If yes, you should run the proxy container in the same network.

Next look up the alias for the container:

docker inspect $CONTAINER | grep -A2 Aliases
                "Aliases": [
                    "my-alias",
                    "23ea4ea42e34a"

Save it for later in the variable $ALIAS:

ALIAS=$(docker inspect --format '{{index .NetworkSettings.Networks "'$NET_NAME'" "Aliases" 0}}' $CONTAINER)

Now run socat in a container in the network $NET_NAME to bridge to the $ALIASed container's exposed (but not published) port:

docker run \
    --detach --name my-new-proxy \
    --net $NET_NAME \
    --publish 8080:1234 \
    alpine/socat TCP-LISTEN:1234,fork TCP-CONNECT:$ALIAS:80
Jeremy W. Sherman
ktalik
3

Here are some solutions:

https://forums.docker.com/t/how-to-expose-port-on-running-container/3252/12

The solution for mapping a port while running the container:

docker run -d --net=host myvnc

That will expose and map the ports automatically to your host.

Ijaz Ahmad Khan
2

You can use an overlay network like Weave Net, which will assign a unique IP address to each container and implicitly expose all the ports to every container that is part of the network.

Weave also provides host network integration. It is disabled by default, but, if you want to also access the container IP addresses (and all their ports) from the host, you can simply run weave expose.
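
Roughly, with the Weave Net CLI (the exact invocation can differ between versions, so treat this as a sketch):

weave launch                     # start Weave Net on this host
eval $(weave env)                # point the docker CLI at the Weave proxy
docker run -d --name web nginx   # the container gets a Weave IP automatically
weave expose                     # give the host an interface on the Weave network too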

Full disclosure: I work at Weaveworks.

Peter Mortensen
fons
1

Read Ricardo's response first. This worked for me.

However, there is a scenario where this won't work: when the running container was kicked off using docker-compose. This is because docker-compose (I'm running Docker 1.17) creates a new network. The way to address this scenario is:

docker network ls

Then add --net network_name (using the network name found above) to the docker run command from Ricardo's answer, before the image name:

docker run -d --name sqlplus --net network_name --link db:db -p 1521:1521 sqlplus

ice.nicer
1

Based on Robm's answer I have created a Docker image and a Bash script called portcat.

Using portcat, you can easily map multiple ports to an existing Docker container. An example using the (optional) Bash script:

curl -sL https://raw.githubusercontent.com/archan937/portcat/master/script/install | sudo bash
portcat my-awesome-container 3456 4444:8080

And there you go! Portcat is mapping:

  • port 3456 to my-awesome-container:3456
  • port 4444 to my-awesome-container:8080

Please note that the Bash script is optional; you can run the following commands directly:

ipAddress=$(docker inspect my-awesome-container | grep IPAddress | grep -o '[0-9]\{1,3\}\(\.[0-9]\{1,3\}\)\{3\}' | head -n 1)
docker run -p 3456:3456 -p 4444:4444 --name=alpine-portcat -it pmelegend/portcat:latest $ipAddress 3456 4444:8080

I hope portcat will come in handy for you guys. Cheers!

Paul Engel
0

It's not possible to do live port mapping but there are multiple ways you can give a Docker container what amounts to a real interface like a virtual machine would have.

Macvlan Interfaces

Docker now includes a Macvlan network driver. This attaches a Docker network to a "real world" interface and allows you to assign that network's addresses directly to the container (like a virtual machine's bridged mode).

docker network create \
    -d macvlan \
    --subnet=172.16.86.0/24 \
    --gateway=172.16.86.1  \
    -o parent=eth0 pub_net
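
A container can then be attached to that network with an address from the subnet (the IP below is just an example; note that by default the Docker host itself cannot reach macvlan containers directly):

docker run -d --net pub_net --ip 172.16.86.10 nginx
# 172.16.86.10:80 is now reachable from the rest of the 172.16.86.0/24 network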

pipework can also map a real interface into a container or set up a sub-interface in older versions of Docker.

Routing IPs

If you have control of the network you can route additional networks to your Docker host for use in the containers.

Then you assign that network to the containers and set up your Docker host to route the packets via the Docker network, roughly as sketched below.
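
A rough sketch, assuming the subnet 10.1.1.0/24 is routed to a Docker host at 192.0.2.5 (both addresses are placeholders) and that you turn off Docker's NAT for that network; additional firewall/forwarding configuration may still be needed:

# on the Docker host: a bridge network that uses the routed subnet without masquerading
docker network create \
    --subnet 10.1.1.0/24 \
    -o com.docker.network.bridge.enable_ip_masquerade=false \
    routed_net
docker run -d --net routed_net --ip 10.1.1.10 nginx

# on the upstream router: send the subnet to the Docker host
ip route add 10.1.1.0/24 via 192.0.2.5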

Shared host interface

The --net host option allows the host interface to be shared into a container but this is probably not a good setup for running multiple containers on the one host due to the shared nature.

Matt
0

I wrote a blog post that explains how to access an unpublished port of a container in different ways, depending on the needs:

  • by committing a new image and running a new container,
  • by using socat to avoid restarting the container.

The post also gives a brief introduction to how port mapping works, the difference between exposing and publishing a port, and what socat is.

Here’s the link: https://lmcaraig.com/accessing-an-unpublished-port-of-a-running-docker-container

se7entyse7en