
I'm pretty new to Docker and I wanted to map the node_modules folder to my computer (for debugging purposes).

This is my docker-compose.yml

web:
  build: .
  ports:
    - "3000:3000"
  links:
    - db
  environment:
    PORT: 3000
  volumes:
    - .:/usr/src/app
    - /usr/src/app/node_modules
db:
  image: mongo:3.3
  ports:
    - "27017:27017"
  command: "--smallfiles --logpath=/dev/null"

I'm using Docker for Mac. When I run docker-compose up -d everything goes fine, but it creates a node_modules folder on my computer that stays empty. When I go into the bash of my container and ls node_modules, all the packages are there.

How can I get the contents from the container onto my computer too?

Thank you

Mike Boutin

5 Answers


First, there's an order of operations. When you build your image, volumes are not mounted; they only get mounted when you run the container. So when the build finishes, all the changes exist only inside the image, not in any volume. If you mount a volume on a directory, it overlays whatever was in the image at that location, hiding those contents from view (with one initialization exception, see below).


Next is the volume syntax:

  volumes:
    - .:/usr/src/app
    - /usr/src/app/node_modules

tells docker-compose to create a host volume from the current directory to /usr/src/app inside the container, and then to map /usr/src/app/node_modules to an anonymous volume maintained by docker. The latter will appear as a volume in docker volume ls with a long uuid string that is relatively useless.

To map /usr/src/app/node_modules to a folder on your host, you'll need to include a folder name and colon in front of that like you have on the line above. E.g. /host/dir/node_modules:/usr/src/app/node_modules.
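Applied to the compose file from the question, that host mapping would look something like this (the `./node_modules` host path is just an example, not a recommendation):

```yaml
web:
  build: .
  volumes:
    - .:/usr/src/app
    - ./node_modules:/usr/src/app/node_modules
```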

Named volumes are a bit different than host volumes in that docker maintains them with a name you can see in docker volume ls. You reference these volumes with just a name instead of a path. So node_modules:/usr/src/app/node_modules would create a volume called node_modules that you can mount in a container with just that name.
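As a sketch, a named volume in a compose file is declared in a top-level `volumes:` section. Note that this requires the version 2+ compose file format (with a `services:` key), unlike the v1 file in the question:

```yaml
version: "2"
services:
  web:
    build: .
    volumes:
      - node_modules:/usr/src/app/node_modules
volumes:
  node_modules:
```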

I diverged to describe named volumes because they come with a feature that turns into a gotcha with host volumes. Docker helps you out with named volumes by initializing them with the contents of the image at that location. So in the above example, if the named volume node_modules is empty (or new), it will first copy the contents of the image at /usr/src/app/node_modules to this volume and then mount it inside your container.

With host volumes, you will never see any initialization: whatever is at that location, even an empty directory, is all you see in the container. There's no way to have the contents of the image at that directory location copied out to the host volume first. This also means that directory permissions needed inside the container are not inherited automatically; you need to manually set permissions on the host directory that will work inside the container.


Finally, there's a small gotcha with Docker for Windows and Mac: they run inside a VM, and your host volumes are mounted to the VM. To get the volume mounted from the host, you have to configure the application to share the folder on your host with the VM, which then mounts the volume into the container. By default, on Mac, the /Users folder is included, but if you use other directories, e.g. a /Projects directory, or even a lower case /users (unix and bsd are case sensitive), you won't see the contents from your Mac inside the container.


With that base knowledge covered, one possible solution is to redesign your workflow to get the directory contents from the image copied out to the host. First you need to copy the files to a different location inside your image. Then you need to copy the files from that saved image location to the volume mount location on container startup. When you do the latter, you should note that you are defeating the purpose of having a volume (persistence) and may want to consider adding some logic to be more selective about when you run the copy. To start, add an entrypoint.sh to your build that looks like:

#!/bin/sh
# copy from the image backup location to the volume mount
cp -a /usr/src/app_backup/node_modules/* /usr/src/app/node_modules/
# this next line runs the docker command
exec "$@"
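A possible refinement (my sketch, not part of the original answer) is to be selective as suggested above and only seed the directory when it is still empty, so restarts don't clobber newer modules. The /tmp paths here are purely illustrative stand-ins for the image backup location and the volume mount point:

```shell
#!/bin/sh
# Illustrative stand-ins for the image backup and the volume mount point.
backup=/tmp/demo_app_backup/node_modules
mount=/tmp/demo_app/node_modules

mkdir -p "$backup" "$mount"
touch "$backup/left-pad.js"   # pretend this came from `npm install` at build time

# Only seed the volume when it is empty (e.g. on the container's first run).
if [ -z "$(ls -A "$mount")" ]; then
  cp -a "$backup/." "$mount/"
fi

ls "$mount"   # → left-pad.js
```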

Then update your Dockerfile to include the entrypoint and a backup command:

FROM node:6.3

# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install -g babel babel-runtime babel-register mocha nodemon
RUN npm install

# Bundle app source
COPY . /usr/src/app
RUN cp -a /usr/src/app/. /usr/src/app_backup

EXPOSE 1234
ENTRYPOINT [ "/usr/src/app/entrypoint.sh" ]
CMD [ "npm", "start" ]

And then drop the extra volume from your docker-compose.yml:

  volumes:
    - .:/usr/src/app
BMitch
  • I think you need to fix how the volume is mounted, not only drop the extra volume, as I posted in my answer. – Robert May 05 '17 at 22:03
  • If they want `node_modules` to be saved in `./node_modules` the above works. Otherwise, yes, they need to specify a different volume mount as you've shown. – BMitch May 05 '17 at 22:08
  • If I'm not wrong, specifying a volume like that creates an anonymous volume. It lacks the local (host) directory. – Robert May 05 '17 at 22:14
  • The bottom volume inside a docker-compose.yml doesn't. The top volume section is me copying from the question and then explaining that it creates an anonymous volume. If there's a better way to phrase that, let me know. – BMitch May 05 '17 at 22:16
  • In the part `Next is the volume syntax:`, it's actually placed in the current directory as BMitch mentioned, not in an anonymous volume, but there are no more details in the Docker documentation; this still confuses me. – Tokenyet Jan 28 '19 at 11:49
  • 1
    @Tokenyet `.:/usr/src/app` bind mounts the current directory as a volume. `/usr/src/app/node_modules` creates an anonymous volume. https://success.docker.com/article/different-types-of-volumes – BMitch Jan 28 '19 at 12:41

TL;DR Working example, clone and try: https://github.com/xbx/base-server


You need a node_modules folder on your computer (outside the image) for debugging purposes first (before running the container).

If you want to debug only node_modules:

volumes:
    - /path/to/node_modules:/usr/src/app/node_modules

If you want to debug both your code and node_modules:

volumes:
    - .:/usr/src/app/

Remember that you will need to run npm install at least once outside the container (or copy the node_modules directory that the docker build generates). Let me know if you need more details.


Edit. So, without needing npm on OSX, you can:

  1. docker build and then docker cp <container-id>:/path/to/node-modules ./local-node-modules/. Then in your docker-compose.yml mount those files and troubleshoot whatever you want.
  2. Or, docker build and there (in the Dockerfile) do the npm install in another directory. Then in your command (CMD or docker-compose command) copy (cp) them to the right directory; since that directory is mounted empty from your computer (a volume in the docker-compose.yml), you can then troubleshoot whatever you want.

Edit 2. (Option 2) Working example, clone and try: https://github.com/xbx/base-server I did it all automatically in this repo, forked from yours.

Dockerfile

FROM node:6.3

# Install app dependencies
RUN mkdir /build-dir
WORKDIR /build-dir
COPY package.json /build-dir
RUN npm install -g babel babel-runtime babel-register mocha nodemon
RUN npm install

# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
RUN ln -s /build-dir/node_modules node_modules

# Bundle app source
COPY . /usr/src/app

EXPOSE 1234
CMD [ "npm", "start" ]

docker-compose.yml

web:
  build: .
  ports:
    - "1234:1234"
  links:
    - db # link with the DB
  environment:
    PORT: 1234
  command: /command.sh
  volumes:
    - ./src/:/usr/src/app/src/
    - ./node_modules:/usr/src/app/node_modules
    - ./command.sh:/command.sh
db:
  image: mongo:3.3
  ports:
    - "27017:27017"
  command: "--smallfiles --logpath=/dev/null"

command.sh

#!/bin/bash

cp -r /build-dir/node_modules/ /usr/src/app/

exec npm start

Please, clone my repo and do docker-compose up. It does what you want. PS: it can be improved to do the same in a better way (i.e. best practices, etc.)

I'm in OSX and it works for me.

Robert
  • Since `npm install` is platform dependent, running it on the host might lead to cross-platform issues (host=mac, container=debian). – gesellix May 05 '17 at 18:40
  • Seems like you're suggesting to manually copy the `npm install` results to the volume. Is there a reason you prefer doing it manually rather than automatically as part of the build and entrypoint like I posted in my answer? – BMitch May 05 '17 at 20:40
  • Will that symbolic link work when you mount in an empty volume? This is starting to look very similar to the answer I posted earlier. – BMitch May 05 '17 at 21:18
  • The symbolic link is only to run the container in the case that you don't use the volume. Have you tried `docker-compose up` with what I did? – Robert May 05 '17 at 21:24
  • I've sent a fix. https://github.com/xbx/base-server/commit/8f67bf9bd77eeb291d516c61995f0a10b00b295f – Robert May 05 '17 at 21:30
  • @BMitch, I'm sorry. I just realized that you posted the same solution first (essentially the same as mine). Credit to you then – Robert May 05 '17 at 21:59
  • Oh, woops, I thought the link was the other way around, now I'm following. I like this variant, saves one copy inside the image. – BMitch May 05 '17 at 22:03
  • Yes @BMitch. A copy of typically a lot of files (node_modules) – Robert May 05 '17 at 22:06
  • I like this solution, I didn't think about a bash script to copy the node_modules after the volume was mounted. Thanks a lot for your help! – Alessandro Cappello May 08 '17 at 15:08
  • Has someone solved the issue with `node_modules`? I don't want to install them on my host because of possible cross-platform issues (@gesellix wrote about this above too). Is it possible to install `node_modules` inside the Docker container and mirror them to the host so that I could take a look at the sources when I need to, and so that my IDE could see all the `devDependencies` like `eslint` and others? – Vladyslav Turak Jun 28 '18 at 15:04
  • I tried your solution; after everything runs, at the end it says `/usr/local/bin/docker-entrypoint.sh: exec: line 8: /start.sh: not found`. `start.sh` is just the `command.sh`. If I run the two commands ONE by ONE I am able to run them, but they don't run together. If I run the script, it gives that error; if I do `bash -c "cp-command && yarn start"`, it says `/app/bash` not found. Can you please help? – saadi Jun 02 '20 at 07:35

I built upon @Robert's answer, as there were a couple of things it didn't take into consideration; namely:

  • cp takes too long and the user can't view the progress.
  • I want node_modules to be overwritten if it were installed through the host machine.
  • I want to be able to git pull whether the container is running or not, and have node_modules updated accordingly, should there be any changes.
  • I only want this behavior during the development environment.

To tackle the first issue, I installed rsync on my image, as well as pv (because I want to view the progress while deleting as well). Since I'm using alpine, I used apk add in the Dockerfile:

# Install rsync and pv to view progress of moving and deletion of node_modules onto host volume.
RUN apk add rsync pv

I then changed the entrypoint.sh to look like so (you may substitute yarn.lock with package-lock.json):

#!/bin/ash

# Declaring variables.
buildDir=/home/node/build-dir
workDir=/home/node/work-dir
package=package.json
lock=yarn.lock
nm=node_modules

#########################
# Begin Functions
#########################

copy_modules () { # Copy all files of build directory to that of the working directory.
  echo "Calculating build folder size..."
  buildFolderSize=$( du -a $buildDir/$nm | wc -l )
  echo "Copying files from build directory to working directory..."
  rsync -avI $buildDir/$nm/. $workDir/$nm/ | pv -lfpes "$buildFolderSize" > /dev/null
  echo "Creating flag to indicate $nm is in sync..."
  touch $workDir/$nm/.docked # Docked file is a flag that tells the files were copied already from the build directory.
}

delete_modules () { # Delete old module files.
    echo "Calculating incompatible $1 directory $nm folder size..."
    folderSize=$( du -a $2/$nm | wc -l )
    echo "Deleting incompatible $1 directory $nm folder..."
    rm -rfv $2/$nm/* | pv -lfpes "$folderSize" > /dev/null # Delete all files in node_modules.
    rm -rf $2/$nm/.* 2> /dev/null # Delete all hidden files in node_modules.
}

#########################
# End Functions
# Begin Script
#########################

if ! cmp -s $buildDir/$lock $workDir/$lock # Compare lock files; reinstall only if they differ.
  then
    # Delete old modules.
    delete_modules "build" "$buildDir"
    # Remove old build package.
    rm -rf $buildDir/$package 2> /dev/null
    rm -rf $buildDir/$lock 2> /dev/null
    # Copy package.json from working directory to build directory.
    rsync --info=progress2 $workDir/$package $buildDir/$package
    rsync --info=progress2 $workDir/$lock $buildDir/$lock
    cd $buildDir/ || return
    yarn
    delete_modules "working" "$workDir"
    copy_modules

# Check if the directory is empty, as it is when it is mounted for the first time.
elif [ -z "$(ls -A $workDir/$nm)" ]
  then
    copy_modules
elif [ ! -f "$workDir/$nm/.docked" ] # Check if modules were copied from build directory.
  then
    # Delete old modules.
    delete_modules "working" "$workDir"
    # Copy modules from build directory to working directory.
    copy_modules
else
    echo "The node_modules folder is good to go; skipping copying."
fi

#########################
# End Script
#########################

if [ "$1" != "git" ] # Check if script was not run by git-merge hook.
  then
    # Change to working directory.
    cd $workDir/ || return
    # Run yarn start command to start development.
    exec yarn start:debug
fi
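For context, an entrypoint like this would typically be wired into the image roughly as follows. This is a sketch; the /home/node/build-dir paths are the ones assumed by the script above, not confirmed by the answer:

```dockerfile
# Copy the entrypoint into the image and make it executable.
COPY entrypoint.sh /home/node/build-dir/entrypoint.sh
RUN chmod +x /home/node/build-dir/entrypoint.sh
ENTRYPOINT ["/home/node/build-dir/entrypoint.sh"]
```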

I added pv to at least show the user the progress of what is happening. Also, I added a flag file to indicate that node_modules was installed through a container.

Whenever a package is installed, I utilized the postinstall and postuninstall hooks of the package.json file to copy the package.json and yarn.lock files from the working directory to the build directory to keep them up to date. I also installed the postinstall-postinstall package to make sure the postuninstall hook works.

"postinstall"  : "if test $DOCKER_FLAG = 1; then rsync -I --info=progress2 /home/node/work-dir/package.json /home/node/build-dir/package.json && rsync -I --info=progress2 /home/node/work-dir/yarn.lock /home/node/build-dir/yarn.lock && echo 'Build directory files updated.' && touch /home/node/work-dir/node_modules/.docked; else rm -rf ./node_modules/.docked && echo 'Warning: files installed outside container; deleting docker flag file.'; fi",
"postuninstall": "if test $DOCKER_FLAG = 1; then rsync -I --info=progress2 /home/node/work-dir/package.json /home/node/build-dir/package.json && rsync -I --info=progress2 /home/node/work-dir/yarn.lock /home/node/build-dir/yarn.lock && echo 'Build directory files updated.' && touch /home/node/work-dir/node_modules/.docked; else rm -rf ./node_modules/.docked && echo 'Warning: files installed outside container; deleting docker flag file.'; fi",

I used an environment variable called DOCKER_FLAG and set it to 1 in the docker-compose.yml file. That way, it won't run when someone installs outside a container. Also, I made sure to remove the .docked flag file so the script knows it has been installed using host commands.
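For reference, setting that variable in docker-compose.yml looks something like this (the service name `web` is illustrative, not from the answer):

```yaml
web:
  environment:
    DOCKER_FLAG: 1
```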

As for the issue of synchronizing node_modules every time a pull occurs, I used a git hook; namely, the post-merge hook. Every time I pull, it will attempt to run the entrypoint.sh script if the container is running. It will also pass the argument `git` to the script, which the script checks so as not to run `exec yarn start:debug`, as the container is already running. Here is my script at .git/hooks/post-merge:

#!/bin/bash

if [ -x "$(command -v docker)" ] && docker ps | grep -q <container_name>
then
  docker exec <container_name> sh -c "/home/node/build-dir/entrypoint.sh git"
fi

If the container is not running and I fetched the changes, then the entrypoint.sh script will first check if there are any differences between the lock files; if there are, it will reinstall in the build directory, just as it did when the image was created and the container run for the first time. This tutorial may be used to share hooks with teammates.
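One simple way to share such a hook with teammates (a sketch; the tracked `.githooks/` directory is my assumption, not from the answer) is to keep the hook in the repo and copy it into .git/hooks, e.g. from an npm "prepare" script. The /tmp paths below just simulate a checkout:

```shell
#!/bin/sh
# Simulate a repo checkout with a tracked hooks directory (paths illustrative).
repo=/tmp/demo_repo
mkdir -p "$repo/.githooks" "$repo/.git/hooks"

# A tracked post-merge hook that teammates receive via git pull.
printf '#!/bin/sh\necho "syncing node_modules"\n' > "$repo/.githooks/post-merge"

# Install step: copy the tracked hook into the untracked .git/hooks directory.
cp "$repo/.githooks/post-merge" "$repo/.git/hooks/post-merge"
chmod +x "$repo/.git/hooks/post-merge"

"$repo/.git/hooks/post-merge"   # → syncing node_modules
```

Newer git versions can instead point `core.hooksPath` at the tracked directory, avoiding the copy step.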


Note: Be sure to use docker-compose run..., as docker-compose up... won't allow for the progress indicators to appear.

yaharga

The simplest solution

Configure the node_modules volume to use your local node_modules directory as its storage location using Docker Compose and the Local Volume Driver with a Bind Mount.

First, add your node_modules volume to your service:

ui:
  volumes:
    - node_modules:/path/to/node_modules

Then, configure the volume, in the named volumes section:

volumes:
  node_modules:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: ./local/path/to/node_modules

Just make sure you always make node_modules changes inside the Docker container, and they will be synchronized perfectly and available on the host for IDEs, code completion, debugging, etc.
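One caveat worth verifying on your setup (it isn't covered above, so treat it as an assumption): the local driver's `device` option generally requires an absolute path, which you can build from a variable such as `${PWD}`:

```yaml
volumes:
  node_modules:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: ${PWD}/local/path/to/node_modules
```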

JeremyM4n
  • Thank you, this is exactly what I wanted. Could you provide a brief explanation of the volume driver_opts? – e-e Apr 08 '21 at 03:43
  • @e-e Since the driver is set to local in this case, the driver_opts are the options for the local driver. "type" is none here because we're using the host filesystem; otherwise it could be set to "nfs", "cifs", etc. "o", short for "opt", a.k.a. "options", is a comma-separated list of driver options, in this case "bind" to create a bind mount. And "device" is the storage location for the volume. – JeremyM4n Apr 08 '21 at 18:31

change:

  volumes:
    - .:/usr/src/app
    - /usr/src/app/node_modules

TO:

  volumes:
    - .:/usr/src/app

And it will put the node_modules in your locally mapped volume. The way you have it, /usr/src/app/node_modules will be stored in a different volume; you would need to docker inspect {container-name} to find its location. If you do want to specify the location, specify it like:

- /path/to/my_node_modules:/usr/src/app/node_modules

ldg
  • I tried it at first, but if I don't have `- /usr/src/app/node_modules` in my docker-compose.yml volumes, my app doesn't find the packages. It's like they were never installed. If I add it, my Node.js file works, but the folder stays empty. – Mike Boutin Jul 17 '16 at 22:03
  • Can you post your Dockerfile for the node app? – ldg Jul 17 '16 at 22:50