In a deployment bash script, I have two hosts:

- `localhost`, the machine that typically builds the Docker images.
- `$REMOTE_HOST`, which is a production web server.

I need to transfer a locally built Docker image to `$REMOTE_HOST` in the most efficient way possible (fast, reliable, private, storage-friendly). To date, I have the following command in my script:
```shell
docker save $IMAGE_NAME:latest | ssh -i $KEY_FILE -C $REMOTE_HOST docker load
```
This has the following PROS:

- Utilizes compression on the fly (`ssh -C`)
- Does not store intermediate files on either the source or the destination
- Transfers directly (the images may be private), which also reduces upload time and is "greener" in a broader sense
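On the compression point, `ssh -C` could also be replaced by an explicit compressor in the pipe. A minimal sketch of that variant, assuming `gzip` is available on both hosts and reusing the same `$IMAGE_NAME`, `$KEY_FILE`, and `$REMOTE_HOST` placeholders:

```shell
# Sketch: explicit compression in the pipe instead of ssh -C.
# Assumes gzip is installed on both localhost and $REMOTE_HOST.
docker save "$IMAGE_NAME:latest" \
  | gzip \
  | ssh -i "$KEY_FILE" "$REMOTE_HOST" 'gunzip | docker load'
```

(`docker load` also accepts gzip-compressed input directly, so the remote `gunzip` may even be optional.)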
However, there is a CON as well: when transferring larger images, you don't know the operation's progress, so you have to wait an unknown but considerable time that you can't estimate. I have heard that progress can be tracked with something like `rsync --progress`, but rsync transfers files and does not fit well into my old UNIX-style pipeline. Of course, you could `docker load` from a file, but how do I avoid that?

How can I use piping while preserving the above advantages? (Or is there another dedicated tool for copying a built image to a remote Docker host that shows progress?)
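To illustrate the kind of pipeline I imagine, here is a sketch assuming the `pv` utility is installed on the build machine; the size reported by `docker image inspect` would give `pv` an estimate for a percentage and ETA display:

```shell
# Sketch: insert pv into the pipe for a progress bar.
# Assumes pv is installed on localhost; variables are the same
# placeholders used in the original command.
# Note: docker image inspect reports the unpacked image size, so
# the percentage is only an approximation of the tar stream size.
SIZE=$(docker image inspect "$IMAGE_NAME:latest" --format '{{.Size}}')
docker save "$IMAGE_NAME:latest" \
  | pv -s "$SIZE" \
  | ssh -i "$KEY_FILE" -C "$REMOTE_HOST" docker load
```

Is this the right direction, or does it break one of the advantages above?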