
I have a problem installing node_modules inside the Docker container and synchronizing them with the host. My Docker version is 18.03.1-ce, build 9ee9f40, and my Docker Compose version is 1.21.2, build a133471.

My docker-compose.yml looks like:

# Frontend Container.
frontend:
  build: ./app/frontend
  volumes:
    - ./app/frontend:/usr/src/app
    - frontend-node-modules:/usr/src/app/node_modules
  ports:
    - 3000:3000
  environment:
    NODE_ENV: ${ENV}
  command: npm start

# Define all the external volumes.
volumes:
  frontend-node-modules: ~

My Dockerfile:

# Set the base image.
FROM node:10

# Create and define the working directory.
RUN mkdir /usr/src/app
WORKDIR /usr/src/app

# Install the application's dependencies.
COPY package.json ./
COPY package-lock.json ./
RUN npm install

The trick with the external volume is described in a lot of blog posts and Stack Overflow answers. For example, this one.

The application works great. The source code is synchronized. The hot reloading works great too.

The only problem I have is that the node_modules folder is empty on the host. Is it possible to synchronize the node_modules folder inside the Docker container with the host?

I've already read these answers:

  1. docker-compose volume on node_modules but is empty
  2. Accessing node_modules after npm install inside Docker

Unfortunately, they didn't help me much. I don't like the first one, because I don't want to run npm install on my host due to possible cross-platform issues (e.g. the host is Windows or Mac and the Docker container is Debian 8 or Ubuntu 16.04). The second one doesn't work for me either, because I'd like to run npm install in my Dockerfile instead of running it after the Docker container has started.

Also, I've found this blog post. The author tries to solve the same problem I'm facing. The problem is that node_modules won't be synchronized, because we're just copying them from the Docker container to the host.

I'd like my node_modules inside the Docker container to be synchronized with the host. Please take into account that I want:

  • to install node_modules automatically instead of manually
  • to install node_modules inside the Docker container instead of the host
  • to have node_modules synchronized with the host (if I install some new package inside the Docker container, it should be synchronized with the host automatically without any manual actions)

I need to have node_modules on the host, because:

  • the possibility to read the source code when I need to
  • my IDE needs node_modules installed locally so that it has access to devDependencies such as eslint or prettier; I don't want to install these devDependencies globally

Thanks in advance.

daniel kullmann
Vladyslav Turak

11 Answers

68

First of all, I would like to thank David Maze and trust512 for posting their answers. Unfortunately, they didn't help me solve my problem.

I would like to post my answer to this question.

My docker-compose.yml:

---
# Define Docker Compose version.
version: "3"

# Define all the containers.
services:
  # Frontend Container.
  frontend:
    build: ./app/frontend
    volumes:
      - ./app/frontend:/usr/src/app
    ports:
      - 3000:3000
    environment:
      NODE_ENV: development
    command: /usr/src/app/entrypoint.sh

My Dockerfile:

# Set the base image.
FROM node:10

# Create and define the node_modules cache directory.
RUN mkdir /usr/src/cache
WORKDIR /usr/src/cache

# Install the application's dependencies into the node_modules cache directory.
COPY package.json ./
COPY package-lock.json ./
RUN npm install

# Create and define the application's working directory.
RUN mkdir /usr/src/app
WORKDIR /usr/src/app

And last but not least, entrypoint.sh:

#!/bin/bash

cp -r /usr/src/cache/node_modules/. /usr/src/app/node_modules/
exec npm start

The trickiest part here is to install the node_modules into the node_modules cache directory (/usr/src/cache), which is defined in our Dockerfile. After that, entrypoint.sh copies the node_modules from the cache directory (/usr/src/cache) into our application directory (/usr/src/app). Thanks to this, the entire node_modules directory appears on our host machine.
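
Note, based on the comments below: the entrypoint script must be executable and use Unix (LF) line endings, otherwise the container fails with "not found" or "Permission denied" errors. One way to fix this on the host before building (the sed invocation assumes GNU sed):

chmod +x app/frontend/entrypoint.sh
# Convert CRLF line endings to LF if the file was edited on Windows:
sed -i 's/\r$//' app/frontend/entrypoint.sh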

Looking at my question above I wanted:

  • to install node_modules automatically instead of manually
  • to install node_modules inside the Docker container instead of the host
  • to have node_modules synchronized with the host (if I install some new package inside the Docker container, it should be synchronized with the host automatically without any manual actions)

The first thing is done: node_modules are installed automatically. The second thing is done too: node_modules are installed inside the Docker container (so, there will be no cross-platform issues). And the third thing is done too: node_modules that were installed inside the Docker container will be visible on our host machine and they will be synchronized! If we install some new package inside the Docker container, it will be synchronized with our host machine at once.

The important thing to note: strictly speaking, a new package installed inside the Docker container will appear in /usr/src/app/node_modules. As this directory is synchronized with our host machine, the new package will appear in the host machine's node_modules directory too. But /usr/src/cache/node_modules will still contain the old build at this point (without the new package). Anyway, that is not a problem for us. During the next docker-compose up --build (--build is required), Docker will re-install the node_modules (because package.json has changed) and entrypoint.sh will copy them to /usr/src/app/node_modules.

You should take one more important thing into account. If you git pull code from the remote repository or git checkout your-teammate-branch while Docker is running, some new packages may have been added to package.json. In that case, stop the containers with CTRL + C and bring them up again with docker-compose up --build (--build is required). If your containers are running as a daemon, execute docker-compose stop to stop them, then bring them up again with docker-compose up --build.
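
For example, after pulling changes that touch package.json:

docker-compose stop        # or CTRL + C if running in the foreground
docker-compose up --build  # --build is required so npm install runs again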

If you have any questions, please let me know in the comments.

Hope this helps.

Vladyslav Turak
  • This solution worked so well for me. Thanks a lot, I was searching for it for a long time. Thanks! – richardaum Mar 02 '19 at 05:37
  • Using rsync instead of cp can save time. The command runs almost instantly if you have already copied node_modules previously. For example, in my script I run: `rsync -arv /home/app/node_modules /tmp/node_modules` – Joseph Siefers Aug 27 '19 at 21:36
  • This is brilliant, but how does one use this when running it in a Jenkins pipeline? – Steve Tomlin Nov 20 '19 at 12:09
  • Is there anything official for this "problem"? I'm looking for a clean approach that leaves all the work to the container, with some sync mechanism from container to host so that prettier, git, and eslint can be used locally (on the host). – Vitor Camacho Feb 15 '20 at 16:41
  • Thanks for this! Any idea how to speed up the process? Using `rsync` reduces the time of subsequent container runs, but increases the time of the first run after every image build. – lewislbr Mar 27 '20 at 14:53
  • Error: Cannot find module '/usr/src/app/entrypoint.sh'. Can anyone help me? – jcarlosweb Apr 03 '20 at 22:13
  • Forget it. I had to convert the `entrypoint.sh` to Unix format. – jcarlosweb Apr 04 '20 at 22:17
  • Hi, I tried your solution and got `/usr/local/bin/docker-entrypoint.sh: exec: line 8: /usr/src/app/entrypoint.sh: not found`. – saadi Jun 02 '20 at 09:29
  • Thank you, this worked well for me! Small note: since I hadn't generated a `package-lock.json` yet, this generated an error during the build. Instead of having two separate lines in the `Dockerfile` for copying `package-lock.json` and `package.json`, it's easy to use one line, `COPY package*.json ./`, which covers both or either. – m4rlo Nov 13 '20 at 22:38
  • I got this error: `/usr/local/bin/docker-entrypoint.sh: 8: exec: /usr/src/app/entrypoint.sh: Permission denied`. Any fix? – Rolly May 08 '21 at 15:40
  • `RUN mkdir /usr/src/cache` is unnecessary, since `WORKDIR /usr/src/cache` creates the desired directory. – sebassebas1313 May 18 '21 at 15:15
8

Having run into this issue, and finding the accepted answer pretty slow since it copies all of node_modules to the host on every container run, I solved it by installing the dependencies in the container, mirroring the host volume, and skipping the install whenever a node_modules folder is already present:

Dockerfile:

FROM node:12-alpine

WORKDIR /usr/src/app

CMD [ -d "node_modules" ] && npm run start || npm ci && npm run start

docker-compose.yml:

version: '3.8'

services:
  service-1:
    build: ./
    volumes:
      - ./:/usr/src/app

When you need to reinstall the dependencies just delete node_modules.
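
For example:

rm -rf node_modules
docker-compose up   # npm ci runs again because node_modules is gone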

lewislbr
  • This is the cleanest solution I've seen yet, and it doesn't require extra entrypoint scripts. On first run, I see that a node_modules folder gets created on my local file system. – William May 26 '20 at 19:15
  • How does this even work? The node_modules folder from the Dockerfile gets overwritten when mounting the volume. – Daniel W. Jun 12 '20 at 16:22
7

There are three things going on here:

  1. When you run docker build or docker-compose build, your Dockerfile builds a new image containing a /usr/src/app/node_modules directory and a Node installation, but nothing else. In particular, your application isn't in the built image.
  2. When you docker-compose up, the volumes: ['./app/frontend:/usr/src/app'] directive hides whatever was in /usr/src/app and mounts host system content on top of it.
  3. Then the volumes: ['frontend-node-modules:/usr/src/app/node_modules'] directive mounts the named volume on top of the node_modules tree, hiding the corresponding host system directory.

If you were to launch another container and attach the named volume to it, I'd expect you'd see the node_modules tree there. For what you're describing, you simply don't want the named volume: delete the second line from the volumes: block and the volumes: section at the end of the docker-compose.yml file.
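
Applied to the compose file from the question, that leaves only the bind mount; a sketch of the result:

frontend:
  build: ./app/frontend
  volumes:
    - ./app/frontend:/usr/src/app
  ports:
    - 3000:3000
  environment:
    NODE_ENV: ${ENV}
  command: npm start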

David Maze
  • Thanks for your answer. I have removed those two lines. After that, I have the same issue described [here](https://stackoverflow.com/questions/30043872/docker-compose-node-modules-not-present-in-a-volume-after-npm-install-succeeds/). When I run `docker-compose up`, the `node_modules` that were installed in the Docker container with `docker-compose build` are overridden by the application's source code (`./app/frontend:/usr/src/app`). I don't have `node_modules` on the host, and because of this I'm getting an error inside the container (i.e. `node_modules` don't exist). – Vladyslav Turak Jun 29 '18 at 11:21
  • Yes, that's correct, that is in fact how it works: the mounted volume / host directory hides what was in the image. Docker never automatically copies things from an image to a host-path volume. You could run `npm install` or `yarn install` on your host directory before launching the container, or change your command to something like `npm install && npm start`. – David Maze Jun 29 '18 at 12:38
  • I got it. The problem is that I wouldn't like to run `npm install` on my host machine because of possible cross-platform issues. `node_modules` installed on the host machine don't always work properly inside the Docker container. There can be many problems, such as different environments or different `NPM` or `Node.js` versions (e.g. we run `npm install` on Mac with `Node.js 8.11.3` and `NPM 5.6.0` and mount the volume into an Ubuntu 16.04 Docker container with `Node.js 10.5.0` and `NPM 6.1.0`). The application won't work. As I understand it, it is not possible to achieve what I want? – Vladyslav Turak Jun 29 '18 at 13:23
  • For example, [node-sass](https://github.com/sass/node-sass) is a binary, and it really matters where it was compiled. If the environments differ, we will get an error. Read more [here](https://stackoverflow.com/questions/41942769/issue-to-node-sass-and-docker). – Vladyslav Turak Jun 29 '18 at 13:27
4

No one has mentioned a solution that actually uses Docker's entrypoint feature.

Here is my working solution:

Dockerfile (multistage build, so it is both production and local dev ready):

FROM node:10.15.3 as production
WORKDIR /app

COPY package*.json ./
RUN npm install && npm install --only=dev

COPY . .

RUN npm run build

EXPOSE 3000

CMD ["npm", "start"]


FROM production as dev

COPY docker/dev-entrypoint.sh /usr/local/bin/

ENTRYPOINT ["dev-entrypoint.sh"]
CMD ["npm", "run", "watch"]

docker/dev-entrypoint.sh:

#!/bin/sh
set -e

npm install && npm install --only=dev ## Note this line, rest is copy+paste from original entrypoint

if [ "${1#-}" != "${1}" ] || [ -z "$(command -v "${1}")" ]; then
  set -- node "$@"
fi

exec "$@"

docker-compose.yml:

version: "3.7"

services:
    web:
        build:
            target: dev
            context: .
        volumes:
            - .:/app:delegated
        ports:
            - "3000:3000"
        restart: always
        environment:
            NODE_ENV: dev

With this approach you achieve all three points you required, and IMHO it is a much cleaner way: no need to move files around.

Jan Mikeš
  • Could you please clarify why this works? Specifically, I am unsure how it makes the `node_modules` directory available to the host through the mounted volume. – Sung Cho Nov 01 '19 at 06:30
  • Sure! The thing is that the ENTRYPOINT command, in terms of the container's lifecycle, is called AFTER the volume is mounted on the already-running container, so any changes it makes to the filesystem are reflected on the host. In fact, the `node_modules` directory at this point is the host's directory, not the one built during `docker build` directly into the image. – Jan Mikeš Nov 20 '19 at 09:23
  • This approach didn't work for me. When resolving packages with yarn, it got stuck for a long time. – Vitor Camacho Feb 17 '20 at 10:21
  • This worked, and out of all the other answers, this one is preferable for all the reasons the author said. – Rudiger Mar 16 '21 at 01:49
  • I believe this is functionally the same as what @lewislbr did. ENTRYPOINT and CMD happen at the same stage (container built, volume mounted) and the action is the same: `npm install`. – Chris Apr 16 '21 at 19:25
3

Binding your host's node_modules folder to your container's node_modules is not good practice, as you mention. The solution of creating an internal volume for this folder comes up quite often; not doing so will cause problems during the build stage.

I ran into this problem when trying to build a Docker development environment for an Angular app: it showed tslib errors when I edited files in my host folder, because my host's node_modules folder was empty (as expected).

The cheap solution that helped me in this case was to use the Visual Studio Code extension called "Remote-Containers".

This extension allows you to attach Visual Studio Code to your container and transparently edit files within the container's folders. To do so, it installs an internal VS Code server in your development container. For more information, check this link.

Ensure, however, that your volumes are still created in your docker-compose.yml file.
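
For reference, here is a minimal .devcontainer/devcontainer.json sketch for attaching to a Compose service; the service name and paths are assumptions based on the question, not part of the original answer:

{
    "name": "frontend",
    "dockerComposeFile": "../docker-compose.yml",
    "service": "frontend",
    "workspaceFolder": "/usr/src/app"
}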

I hope it helps :D!

CristoJV
2

I wouldn't suggest overlapping volumes. Although I haven't seen any official docs ban it, I've had some issues with it in the past. How I do it is:

  1. Get rid of the external volume, as you are not planning on using it the way it's meant to be used: respawning the container, with its data created specifically in the container, after stopping and removing it.

The above might be achieved by shortening your compose file a bit:

frontend:
  build: ./app/frontend
  volumes:
    - ./app/frontend:/usr/src/app
  ports:
    - 3000:3000
  environment:
    NODE_ENV: ${ENV}
  command: npm start
  2. Avoid overlapping volume data with Dockerfile instructions when not necessary.

That means you might need two Dockerfiles - one for local development and one for deploying a fat image with all the application dist files layered inside.

That said, consider a development Dockerfile:

FROM node:10
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Copy the manifests so that `npm install` has something to install from.
COPY package.json package-lock.json ./
RUN npm install

The above produces a full node_modules installation that gets mapped to your host location, while the command specified in docker-compose starts your application.

trust512
  • Thanks for the answer. It seems like I will have the same problem as I've described [here](https://stackoverflow.com/questions/51097652/install-node-modules-inside-docker-container-and-synchronize-them-with-host#comment89188358_51099162), won't I? – Vladyslav Turak Jun 29 '18 at 11:27
1

Thanks to Vladyslav Turak for the answer with entrypoint.sh, where we copy node_modules from the container to the host.

I implemented something similar, but I ran into an issue with the husky, @commitlint, and tslint npm packages: I couldn't push anything to the repository. Reason: I had copied node_modules from Linux to Windows. In my case <5% of the files were different (.bin and most of the package.json files) and 95% were the same (example: image with diff).

So I went back to running npm install on Windows first to get node_modules for the host (for the IDE and debugging), while the Docker image contains the Linux version of node_modules.

1

I know that this was resolved, but what about:

Dockerfile:

FROM node

# Create app directory
WORKDIR /usr/src/app

# Your other stuff

EXPOSE 3000

docker-compose.yml:

version: '3.2'
services:
    api:
        build: ./path/to/folder/with/a/dockerfile
        volumes:
            - "./volumes/app:/usr/src/app"
        command: "npm start"

volumes/app/package.json

{
    ...,
    "scripts": {
        "start": "npm install && node server.js"
    },
    "dependencies": {
        ...
    }
}

After running, node_modules will be present in your volume, but its contents were generated within the container, so there are no cross-platform problems.
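
A quick way to verify, assuming the paths from the compose file above:

docker-compose up -d api
ls volumes/app/node_modules   # populated by the npm install that ran in the container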

Ignacio
  • @Vladyslav Turak can you please clear up my confusion about entrypoint.sh: which one is the container path and which is the system path for copying the node_modules dir? – Muhammad Hamza Younas Mar 03 '19 at 07:43
  • Any idea about avoiding reinstalling on every container run? – lewislbr Mar 27 '20 at 15:16
  • @lewislbr there is no harm in running "npm install" on every container run. It won't re-install anything already installed, because the node_modules will persist between runs. – Ignacio Dec 09 '20 at 16:32
1

I'm not sure I understand why you want your source code to live both inside the container and on the host, bind mounted to each other, during development. Usually you want your source code to live inside the container for deployments, not for development, since during development the code is available on your host and bind mounted.

Your docker-compose.yml

frontend:
  volumes:
    - ./app/frontend:/usr/src/app

Your Dockerfile

FROM node:10

RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

Of course you must run npm install the first time and every time package.json changes, but you run it inside the container, so there is no cross-platform issue: docker-compose exec frontend npm install

Finally, start your server: docker-compose exec frontend npm start

And then later, usually in a CI pipeline targeting a deployment, you build your final image with the whole source code copied and node_modules reinstalled. Of course, at this point you no longer need the bind mount and "synchronization", so your setup could look like:

docker-compose.yml

frontend:
  build:
    context: ./app/frontend
    target: dev
  volumes:
    - ./app/frontend:/usr/src/app

Dockerfile

FROM node:10 as dev

RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

FROM dev as build

COPY package.json package-lock.json ./
RUN npm install

COPY . ./

CMD ["npm", "start"]

And you target the build stage of your Dockerfile later, either manually or during a pipeline, to build your deployment-ready image.
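
For example, a pipeline might build the deployment-ready image like this (the image tag is illustrative):

docker build --target build -t frontend:deploy ./app/frontend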

I know this isn't the exact answer to your question, since you have to run npm install and nothing lives inside the container during development, but it solves your node_modules issue. I also feel your question mixes development and deployment considerations, so maybe you were thinking about this problem in the wrong way.

1

A Simple, Complete Solution

You can install node_modules in the container using the external named volume trick and synchronize it with the host by configuring the volume's storage location to point to your host's node_modules directory. This can be done with a named volume using the local driver and a bind mount, as seen in the example below.

The volume's data is stored on your host anyway, in something like /var/lib/docker/volumes/, so we're just storing it inside your project instead.

To do this in Docker Compose, just add your node_modules volume to your front-end service, and then configure the volume in the named volumes section, where "device" is the relative path (from the location of docker-compose.yml) to your local (host) node_modules directory.

docker-compose.yml

version: '3.9'

services:
  ui:
    # Your service options...
    volumes:
      - node_modules:/path/to/node_modules

volumes:
  node_modules:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: ./local/path/to/node_modules

The key with this solution is to never make changes directly in your host node_modules, but always install, update, or remove Node packages in the container.
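
For example, to add a package while the service is running (the package name is just a placeholder):

docker-compose exec ui npm install some-package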

Documentation:

JeremyM4n
  • This solution looks cool. Could you please explain the volumes part? Thank you! – madhan May 04 '21 at 03:59
  • @madhan in volumes it's creating a named volume called node_modules using the local driver with a bind mount, and, the most important part, the "device" is the location where the volume data is stored. You can read more about volumes and bind mounts using the links above under Documentation. – JeremyM4n May 06 '21 at 16:30
  • Thank you, and is it possible to create the folder at run time if it does not exist at `device`? – madhan May 07 '21 at 09:21
  • @madhan Docker will create the node_modules folder if it doesn't exist, but the rest of the path should be the relative path (from the location of docker-compose.yml) to the root of your Node-based project. In my case it's ./src/ui/node_modules. – JeremyM4n May 07 '21 at 17:01
  • Thanks Jeremy. In my case it is at root level (`./node_modules`) and Docker didn't create it automatically. I had to create it manually, and the rest went smoothly. I'll try to find a way. – madhan May 09 '21 at 10:06
0

My workaround is to install dependencies when the container starts, instead of at build time.

Dockerfile:

# We're using a multi-stage build so that we can install dependencies during build-time only for production.

# dev-stage
FROM node:14-alpine AS dev-stage
WORKDIR /usr/src/app
COPY package.json ./
COPY . .
# `yarn install` will run every time we start the container. We're using yarn because it's much faster than npm when there's nothing new to install
CMD ["sh", "-c", "yarn install && yarn run start"]

# production-stage
FROM node:14-alpine AS production-stage
WORKDIR /usr/src/app
COPY package.json ./
RUN yarn install
COPY . .

.dockerignore

Add node_modules to .dockerignore to prevent it from being copied when the Dockerfile runs `COPY . .`. We use volumes to bring in node_modules.

**/node_modules

docker-compose.yml

node_app:
    container_name: node_app
    build:
        context: ./node_app
        target: dev-stage # `production-stage` for production
    volumes:
        # For development:
        #   If node_modules already exists on the host, it will be copied
        #   into the container here. Since `yarn install` runs after the
        #   container starts, this volume won't override node_modules.
        - ./node_app:/usr/src/app
        # For production, keep the image's node_modules by masking the
        # bind mount with an anonymous volume (use these two lines instead
        # of the development mount above):
        # - ./node_app:/usr/src/app
        # - /usr/src/app/node_modules

Kasper