
Your AWS Elastic Beanstalk deployment fails:

  • Intermittently

  • For no apparent reason

Step 1: Check obvious log

/var/log/eb-activity.log

  Running npm install:  /opt/elasticbeanstalk/node-install/node-v6.10.0-linux-x64/bin/npm
  Setting npm config jobs to 1
  npm config jobs set to 1
  Running npm with --production flag
  Failed to run npm install. Snapshot logs for more details.
  Traceback (most recent call last):
    File "/opt/elasticbeanstalk/containerfiles/ebnode.py", line 695, in <module>
      main()
    File "/opt/elasticbeanstalk/containerfiles/ebnode.py", line 677, in main
      node_version_manager.run_npm_install(options.app_path)
    File "/opt/elasticbeanstalk/containerfiles/ebnode.py", line 136, in run_npm_install
      self.npm_install(bin_path, self.config_manager.get_container_config('app_staging_dir'))
    File "/opt/elasticbeanstalk/containerfiles/ebnode.py", line 180, in npm_install
      raise e
  subprocess.CalledProcessError: Command '['/opt/elasticbeanstalk/node-install/node-v6.10.0-linux-x64/bin/npm', '--production', 'install']' returned non-zero exit status 1 (ElasticBeanstalk::ExternalInvocationError)
caused by: + /opt/elasticbeanstalk/containerfiles/ebnode.py --action npm-install
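
Aside: the EB CLI can pull the snapshot logs locally instead of SSHing in, assuming the EB CLI is installed and the environment has been initialised with eb init:

eb logs --all

The bundle lands under .elasticbeanstalk/logs/ in the project directory.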

Step 2: Google for appropriate Snapshot log file...

/var/log/nodejs/npm-debug.log

58089 verbose stack Error: spawn ENOMEM
58089 verbose stack     at exports._errnoException (util.js:1022:11)
58089 verbose stack     at ChildProcess.spawn (internal/child_process.js:313:11)
58089 verbose stack     at exports.spawn (child_process.js:380:9)
58089 verbose stack     at spawn (/opt/elasticbeanstalk/node-install/node-v6.10.0-linux-x64/lib/node_modules/npm/lib/utils/spawn.js:21:13)
58089 verbose stack     at runCmd_ (/opt/elasticbeanstalk/node-install/node-v6.10.0-linux-x64/lib/node_modules/npm/lib/utils/lifecycle.js:247:14)
58089 verbose stack     at /opt/elasticbeanstalk/node-install/node-v6.10.0-linux-x64/lib/node_modules/npm/lib/utils/lifecycle.js:211:7
58089 verbose stack     at _combinedTickCallback (internal/process/next_tick.js:67:7)
58089 verbose stack     at process._tickCallback (internal/process/next_tick.js:98:9)
58090 verbose cwd /tmp/deployment/application
58091 error Linux 4.4.44-39.55.amzn1.x86_64
58092 error argv "/opt/elasticbeanstalk/node-install/node-v6.10.0-linux-x64/bin/node" "/opt/elasticbeanstalk/node-install/node-v6.10.0-linux-x64/bin/npm" "--production" "install"
58093 error node v6.10.0
58094 error npm  v3.10.10
58095 error code ENOMEM
58096 error errno ENOMEM
58097 error syscall spawn
58098 error spawn ENOMEM

Step 3: Obvious options...

  • Use a bigger instance and it works...

  • Don't fix, just try again

    • Deploy again and it works...

    • Clone the environment and it works...

    • Rebuild the environment and it works...

  • Be left feeling dirty and wrong

rmharrison

2 Answers


TL;DR

Your instances (t2.micro in my case) are running out of memory because the instance spin-up is parallelised.

Hack resolution: Provision SWAP space on instance and retry

For a one-off fix, while logged into the instance...

sudo /bin/dd if=/dev/zero of=/var/swap.1 bs=1M count=1024
sudo /sbin/mkswap /var/swap.1
sudo chmod 600 /var/swap.1
sudo /sbin/swapon /var/swap.1
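
Before retrying the deploy, a quick check that the swap is actually active:

sudo /sbin/swapon -s
free -m

Swap should show roughly 1 GB total.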

Source / more detail: How do you add swap to an EC2 instance?

During deployment we use a bit of SWAP, but there's no crash:

Mem:   1019116k total,   840880k used,   178236k free,    15064k buffers
Swap:  1048572k total,    12540k used,  1036032k free,    62440k cached

Actual resolutions

Bigger instances

  • While storage can be scaled via EBS, instances come with fixed CPU and RAM (AWS source).
  • They cost money, and these are just dev instances where memory is only a problem during spin-up.

Automate provisioning of swap in Elastic Beanstalk

  • Probably .ebextensions/ (see the sketch below)
  • Open question: CloudFormation-style, or a hook on deploy / restart?
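
A minimal sketch of that .ebextensions approach, assuming a 1 GB swap file at /var/swap.1 is enough; the filename setup-swap.config is arbitrary. The commands: section of an .ebextensions config runs as root before the application build, so npm install gets the extra headroom:

.ebextensions/setup-swap.config

commands:
  01_setup_swap:
    # create the swap file only once per instance; redeploys reuse the box
    test: test ! -e /var/swap.1
    command: |
      /bin/dd if=/dev/zero of=/var/swap.1 bs=1M count=1024
      /bin/chmod 600 /var/swap.1
      /sbin/mkswap /var/swap.1
      /sbin/swapon /var/swap.1

Commit the file with the application source and it runs on every deploy; the test guard should keep it idempotent.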

Hop on the 'server-less' bandwagon

  • The promise of API Gateway + Lambda + Friends is that we shouldn't have to deal with this ish.
  • Are you 'tall enough' for cloud-native microservices? Are they even appropriate to your problem, when something staid/unfashionable like SOA would suffice?
  • Once you go cloud-first, reverting to on-prem is difficult, and that option is a requirement for some.

Use less bloated packages

  • Sometimes you're stuck with legacy
  • Can be caused by necessary transitive- or sub-dependencies. Where does it end...decomposing other people's libraries?

Explanation

A quick Google reveals that ENOMEM is an out-of-memory error. t2.micro instances have only 1 GB of RAM.

Rarely would we use this amount on dev; however, ElasticBeanstalk parallelizes parts of the build process through spawned workers. This means that during SETUP, for the larger packages, one may run out of memory and the operation will fail.
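
One way to watch this live is to SSH into the instance and poll memory while a deploy runs (a simple sketch; the snapshots below may well have been captured this way):

watch -n 1 free -m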

Using free -m we can see...

Start (plenty of free memory)

             total       used       free     shared    buffers     cached
Mem:       1019116     609672     409444        144      45448     240064
-/+ buffers/cache:     324160     694956
Swap:            0          0          0

Ran out of memory (at next tick)

Mem:       1019116     947232      71884        144      11544      81280
-/+ buffers/cache:     854408     164708
Swap:            0          0          0

Deploy process aborted

             total       used       free     shared    buffers     cached
Mem:       1019116     411892     607224        144      13000      95460
-/+ buffers/cache:     303432     715684
Swap:            0          0          0
rmharrison
  • This happened to me when npm itself was only using 300mb. – user Aug 13 '17 at 05:22
  • Suggestion to Stack Overflow: let people like this show their bitcoin wallet address so I can dap them up when they save me time and money. – user Aug 13 '17 at 05:23
  • I created a gist with the .ebextensions file I am using (seems to be working): https://gist.github.com/nsacerdote/8e2fae3e1dd936d6a6d6d906b92b7460 – nsacerdote Apr 01 '19 at 10:18

Rarely would we use this amount on dev; however, ElasticBeanstalk parallelizes parts of the build process through spawned workers. This means that during SETUP, for the larger packages, one may run out of memory and the operation will fail.

That's exactly what was happening with me! My Node.js server worked fine on my dev EC2 t2.micro, but when I deployed a staging environment on Elastic Beanstalk (also with a t2.micro) this error appeared; changing the EB instance to t2.small did the trick.

Ricardo Mutti
  • May be resolved by version bumping to npm v5: "Downloads for large packages are streamed in and out of disk. npm is now able to install packages of *any* size without running out of memory. Support for publishing them is pending (due to registry limitations)." Source: http://blog.npmjs.org/post/161081169345/v500 – rmharrison May 31 '17 at 18:28
  • @rmharrison maybe you should post your comment rather as a possible answer? To me it seems as the simplest solution (though it adds extra delay to the deploy process) – Jakub Holý Aug 30 '17 at 10:53
  • FYI: I have upgraded to npm@5 but needed also to set `unsafe-perm=true` in `.npmrc` to prevent EACCES errors during `node-gyp` runs, see https://stackoverflow.com/questions/46001516/beanstalk-node-js-deployment-node-gyp-fails-due-to-permission-denied/46001517#46001517 – Jakub Holý Sep 01 '17 at 13:46
  • @rmharrison Unfortunately, npm v5.4.2 didn't help. – André Werlang Dec 19 '17 at 14:58