
Self-contained Node.js Deployment

While setting up a Node.js environment on an individual developer's machine can be done casually and tailored to the developer's own taste, deploying Node.js applications on shared or production servers requires a little more advance planning.

To install Node.js on a server, a straightforward approach is to just follow some quick-start instructions from an official source. For instance, assuming the latest v.4.x of Node.js is the target version and CentOS Linux is the OS on the target server, the installation can be as simple as follows:

# Install EPEL (Extra Packages for Enterprise Linux)
sudo yum install epel-release

# Run Node.js pre-installation setup
curl -sL https://rpm.nodesource.com/setup_4.x | sudo bash -

# Install Node.js
sudo yum install -y nodejs

For Ubuntu:

# Install Node.js on Ubuntu
curl -sL https://deb.nodesource.com/setup_4.x | sudo -E bash -
sudo apt-get install -y nodejs

Software version: Latest versus Same

However, the above installation option leaves the version of the installed Node.js out of your control. Although the major release sticks to v.4, whichever update is the latest at the time of the command execution will be installed.

There are debates about always-getting-the-latest versus keeping-the-same-version when it comes to software installation. My take is that on an individual developer's machine, you're at liberty to go for 'latest' or 'same' to suit your own needs (exploring experimental features versus getting ready for production support). But on servers for staging, QA, or production, I would stick to 'same'.

Some advocates of 'latest' even for production servers argue that not doing so could compromise security on the servers. It's a valid concern, but stability is also a critical factor. My recommendation is to keep the version on critical servers consistent while making version updates for security a separate and independent duty, preferably handled by dedicated operations staff.
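To make the 'same' policy operational, a server can carry a tiny guard script that compares the installed Node.js against a pinned version before anything else runs. Below is a minimal sketch; the pinned value and the hard-coded INSTALLED variable are assumptions for illustration (in practice you'd capture the output of node -v):

```shell
#!/bin/bash
# Version-pin guard (sketch): fail fast when the installed Node.js
# drifts from the version this server is supposed to run.
PINNED="v4.4.7"
INSTALLED="v4.4.7"    # in practice: INSTALLED=$(node -v)

if [ "$INSTALLED" = "$PINNED" ]; then
  echo "Node version OK: $INSTALLED"
else
  echo "Version drift: expected $PINNED, found $INSTALLED" >&2
  exit 1
fi
```

Such a check can be dropped into a start script so that an app refuses to launch on a host with an unexpected Node version.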

Onto keeping a fixed Node.js version

As of this writing, the latest LTS (long-term support) release of Node.js is v.4.4.7, and the next LTS (v.6.x) is scheduled to be out in the next quarter of the year. Again, let's assume we're on CentOS, and that it's CentOS 7 64-bit. There are a couple of options.

Option 1: Build from source

# Install Node.js v4.4.7 - Build from source
mkdir ~/nodejs
cd ~/nodejs

curl https://nodejs.org/dist/v4.4.7/node-v4.4.7.tar.gz | tar xz --strip-components=1

./configure --prefix=/usr/local

make

sudo make install

As a side note, if you’re on CentOS 6 or older, you’ll need to update gcc and Python.

Option 2: Use pre-built binary

# Install Node.js v4.4.7 - Linux binary (64bit)
mkdir ~/nodejs
cd ~/nodejs
curl https://nodejs.org/dist/v4.4.7/node-v4.4.7-linux-x64.tar.gz | tar xz --strip-components=1

# Install Node under /opt
mkdir ~/nodejs/etc
echo 'prefix=/usr/local' > ~/nodejs/etc/npmrc
cd ~
sudo mv ~/nodejs /opt/
sudo chown -R root:root /opt/nodejs

# Create soft links in standard search path
sudo ln -s /opt/nodejs/bin/node /usr/local/bin/node
sudo ln -s /opt/nodejs/bin/npm /usr/local/bin/npm

Note that both options above install a system-wide Node.js (which comes with the default package manager, NPM) accessible to all legitimate users on the server host.

Node process manager

Next, install a process manager to manage the Node app's processes, providing features such as auto-restart. Two of the most prominent ones are forever and pm2. Let's go with the slightly more robust pm2. Check for the latest version on the pm2 website and specify it in the npm install command:

# Install global pm2 v1.1.3
sudo npm install -g pm2@1.1.3

# Verify installed pm2
npm list -g | grep pm2

Deploying self-contained Node.js

Depending on specific deployment requirements, one might prefer having Node confined to a local file structure that belongs to a designated user on the server host. In contrast to a system-wide Node.js, this approach equips each of your Node projects with its own Node.js binary and modules.

Docker, as briefly touched on in a previous blog, would be a good tool for such a use case, but one can also handle it without introducing an OS-level virtualization layer. Here's how Node.js can be installed underneath a local Node.js project directory:

# Project directory of your Node.js app
PROJDIR="/path/to/MyNodeApp"

# Install local Node.js v4.4.7 Linux binary (64bit)
mkdir $PROJDIR/nodejs
cd $PROJDIR/nodejs
curl https://nodejs.org/dist/v4.4.7/node-v4.4.7-linux-x64.tar.gz | tar xz --strip-components=1

# Install local pm2 v1.1.3
# pm2 will be installed under $PROJDIR/nodejs/lib/node_modules/pm2/bin/
cd $PROJDIR/nodejs/lib
$PROJDIR/nodejs/bin/npm install pm2@1.1.3
$PROJDIR/nodejs/bin/npm list | grep pm2

Next, create simple scripts to start/stop the local Node.js app (assuming the main Node app is app.js):

Script: $PROJDIR/bin/njsenv.sh (sourced by start/stop scripts)

#!/bin/bash
# $PROJDIR/bin/njsenv.sh

ENVSCRIPT="$0"

# Get absolute filepath of this setenv script
ENVBINPATH="$( cd "$( dirname "$ENVSCRIPT" )" && pwd )"

# Get absolute filepath of the Nodejs project
PROJPATH="$( cd "$ENVBINPATH" && cd ".." && pwd )"

# Get absolute filepath of the Nodejs bin
NJSBINPATH="$( cd "$PROJPATH" && cd nodejs/bin && pwd )"

# Get absolute filepath of the process manager
PMGRPATH=${PROJPATH}/nodejs/lib/node_modules/pm2/bin

# Function for prepending a path segment that is not yet in PATH
pathprepend() {
    for ARG in "$@"
    do
        if [ -d "$ARG" ] && [[ ":$PATH:" != *":$ARG:"* ]]; then
            PATH="$ARG${PATH:+":$PATH"}"
        fi
    done
}

pathprepend "$PMGRPATH" "$NJSBINPATH"

echo "PATH: $PATH"
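To see why the guard inside pathprepend matters, the function can be exercised on its own: repeated calls with the same directory leave PATH with a single copy, and non-existent directories are skipped. This standalone sketch uses a throwaway directory under /tmp (a hypothetical path, purely for illustration):

```shell
#!/bin/bash
# Standalone demo of the pathprepend guard from njsenv.sh:
# a path already in PATH, or a non-existent directory, is not prepended.
pathprepend() {
    for ARG in "$@"
    do
        if [ -d "$ARG" ] && [[ ":$PATH:" != *":$ARG:"* ]]; then
            PATH="$ARG${PATH:+":$PATH"}"
        fi
    done
}

mkdir -p /tmp/njsdemo/bin
PATH="/usr/bin:/bin"
pathprepend /tmp/njsdemo/bin
pathprepend /tmp/njsdemo/bin      # second call is a no-op
pathprepend /tmp/njsdemo/missing  # non-existent dir is skipped
echo "PATH: $PATH"                # PATH: /tmp/njsdemo/bin:/usr/bin:/bin
```

This idempotence is what makes it safe for both start.sh and stop.sh to source njsenv.sh without PATH growing on every invocation.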

Script: $PROJDIR/bin/start.sh

#!/bin/bash

SCRIPT="$0"

# Get absolute filepath of this script
BINPATH="$( cd "$( dirname "$SCRIPT" )" && pwd )"

if [ ! -f "${BINPATH}/njsenv.sh" ]
then
    echo "${BINPATH}/njsenv.sh cannot be found! Aborting ..."
    exit 1
fi

# Set env for PATH and project/app:
#   PATH       = Linux path
#   PROJPATH   = Nodejs project
#   NJSBINPATH = Nodejs bin
#   PMGRPATH   = pm2 path
source ${BINPATH}/njsenv.sh

NODEAPP="app.js"

PMGR=${PMGRPATH}/pm2

echo "Starting $NODEAPP at $PROJPATH ..."

CMD="cd $PROJPATH && $PMGR start $NODEAPP"

# Start Nodejs main app
eval $CMD

echo "Command executed: $CMD"

Script: $PROJDIR/bin/stop.sh

#!/bin/bash

SCRIPT="$0"

# Get absolute filepath of this script
BINPATH="$( cd "$( dirname "$SCRIPT" )" && pwd )"

if [ ! -f "${BINPATH}/njsenv.sh" ]
then
    echo "${BINPATH}/njsenv.sh cannot be found! Aborting ..."
    exit 1
fi

# Set env for PATH and project/app:
#   PATH       = Linux path
#   PROJPATH   = Nodejs project
#   NJSBINPATH = Nodejs bin
#   PMGRPATH   = pm2 path
source ${BINPATH}/njsenv.sh

PMGR=${PMGRPATH}/pm2

echo "Stopping all Node.js processes ..."

CMD="cd $PROJPATH && $PMGR stop all"

# Stop all Nodejs processes
eval $CMD

echo "Command executed: $CMD"

It would make sense to organize such scripts in, say, a top-level bin/ subdirectory. Along with the typical file structure of your Node app (controllers, routes, configurations, etc.), your Node.js project directory might now look like the following:

$PROJDIR/
    app.js
    bin/
      njsenv.sh
      start.sh
      stop.sh
    config/
    controllers/
    log/
    models/
    nodejs/
        bin/
        lib/
            node_modules/
    node_modules/
    package.json
    public/
    routes/
    views/

Packaging/Bundling your Node.js app

Now that the key Node.js software modules are in place all within a local $PROJDIR subdirectory, next in line is to shift the focus to your own Node app and create some simple scripts for bundling the app.

This blog post is aimed at covering relatively simple deployment cases in which there is no need for environment-specific code builds. Should such a need arise, chances are you're already using a build automation tool such as gulp, which was heavily used by a Node app at a recent startup I cofounded. In addition, if the deployment requirements are complex enough, configuration management/automation tools like Puppet, SaltStack or Chef might also come into play.

For simple Node.js deployments in which the app modules can be pre-built prior to deployment, one can come up with simple scripts that pre-package the app in a tar ball, which then gets expanded in the target server environments.

To better manage files for the packaging/bundling task, it's a good practice to maintain a list of the files/directories to be included in a text file, say, include.files. For instance, if there is no need for an environment-specific code build, package.json doesn't need to be included when packaging for the QA/production environment. While at it, also keep a file, exclude.files, that lists all the files/directories to be excluded. For example:

# File include.files:
app.js
config
controllers
models
node_modules
nodejs
public
routes
views

# File exclude.files:
.DS_Store
.git

Below is a simple shell script which does the packaging/bundling of a localized Node.js project:

#!/bin/bash

# Project directory of your Node.js app
PROJDIR="/path/to/MyNodeApp"

# Extract package name and version
NAME=`grep -o '"name"[ \t]*:[ \t]*"[^"]*"' $PROJDIR/package.json | sed -n 's/.*:[ \t]*"\([^"]*\)"/\1/p'`
VERSION=`grep -o '"version"[ \t]*:[ \t]*"[^"]*"' $PROJDIR/package.json | sed -n 's/.*:[ \t]*"\([^"]*\)"/\1/p'`

if [ "$NAME" = "" ]  || [ "$VERSION" = "" ]; then
  echo "ERROR: Package name or version not found! Exiting ..."
  exit 1
fi

# Copy files/directories based on 'include.files' to the bundle subdirectory
cd $PROJDIR

# Create/Recreate bundle subdirectory
rm -rf bundle
mkdir bundle
mkdir bundle/$NAME

for file in `cat include.files`; do cp -rp "$file" bundle/$NAME ; done

# Tar-gz content excluding files/directories based on 'exclude.files'
cd bundle
tar --exclude-from=../exclude.files -czf $NAME-$VERSION.tar.gz $NAME
if [ $? -eq 0 ]; then
  echo "Bundle created under $PROJDIR/bundle: $NAME-$VERSION.tar.gz"
else
  echo "ERROR: Bundling failed!"
fi

rm -rf $NAME
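The grep/sed one-liners above do lightweight JSON scraping, so it's worth sanity-checking them against a known package.json before trusting them in the bundling script. A quick standalone check (using a throwaway file under /tmp, with sample name/version values):

```shell
#!/bin/bash
# Sanity check for the name/version extraction used by the bundling script.
cat > /tmp/package.json <<'EOF'
{
  "name": "mynodeapp",
  "version": "1.0.0"
}
EOF

# Same extraction pipeline as in the bundling script
NAME=`grep -o '"name"[ \t]*:[ \t]*"[^"]*"' /tmp/package.json | sed -n 's/.*:[ \t]*"\([^"]*\)"/\1/p'`
VERSION=`grep -o '"version"[ \t]*:[ \t]*"[^"]*"' /tmp/package.json | sed -n 's/.*:[ \t]*"\([^"]*\)"/\1/p'`

echo "$NAME $VERSION"    # mynodeapp 1.0.0
```

Keep in mind this scraping assumes the name and version each sit on their own line with double-quoted values; a JSON-aware tool would be more robust if your package.json is formatted differently.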

Run bundling scripts from within package.json

An alternative to doing the packaging/bundling with external scripts is to make use of npm's features. The popular Node package manager comes with file exclusion rules based on files listed in .npmignore and .gitignore. It also comes with scripting capability to handle much of what's just described. For example, one could define a custom file-inclusion variable within package.json and executable scripts to do the packaging/bundling using the variables in the form of $npm_package_{var}, like the following:

{
  "name": "mynodeapp",
  "version": "1.0.0",
  "main": "app.js",
  "description": "My Node.js App",
  "author": "Leo C.",
  "license": "ISC",
  "dependencies": {
    "config": "~1.21.0",
    "connect-redis": "~3.1.0",
    "express": "~4.14.0",
    "gulp": "~3.9.1",
    "gulp-mocha": "~2.2.0",
    "helmet": "~2.1.1",
    "lodash": "~4.13.1",
    "mocha": "~2.5.3",
    "passport": "~0.3.2",
    "passport-local": "~1.0.0",
    "pg": "^6.0.3",
    "pg-native": "^1.10.0",
    "q": "~1.4.1",
    "redis": "^2.6.2",
    "requirejs": "~2.2.0",
    "swig": "~1.4.2",
    "winston": "~2.2.0"
  },
  "bundleinclude": "app.js config/ controllers/ models/ node_modules/ nodejs/ public/ routes/ views/",
  "scripts": {
    "bundle": "rm -rf bundle && mkdir bundle && mkdir bundle/$npm_package_name && cp -rp $npm_package_bundleinclude bundle/$npm_package_name && cd bundle && tar --exclude-from=../.npmignore -czf $npm_package_name-$npm_package_version.tgz $npm_package_name && rm -rf $npm_package_name"
  }
}
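Since the bundle script above relies on tar --exclude-from=../.npmignore, the exclusion list lives in .npmignore rather than exclude.files in this variant. A minimal .npmignore for this layout might look like the following (the entries beyond .DS_Store and .git are illustrative additions, not from the original exclude.files):

```
.DS_Store
.git
bundle
*.log
```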

Here comes another side note: in the dependencies section, a version prefixed with ~ allows patch-level updates (e.g. ~1.2.3 allows any 1.2.x update), whereas the ^ prefix allows minor-level updates (e.g. ^1.2.3 allows anything from 1.2.3 up to, but not including, 2.0.0).

To deploy the Node app on a server host, simply scp the bundled tar ball to the designated user on the host (e.g. scp $NAME-$VERSION.tgz njsapp@<host>:package/), then use a simple script similar to the following to extract the bundled tar ball on the host and start/stop the Node app:

#!/bin/bash

if [ $# -ne 2 ]
then
    echo "Usage: $0 <app name> <package file>"
    echo "  e.g. $0 mynodeapp ~/package/mynodeapp-1.0.0.tar.gz"
    exit 1
fi

APPNAME="$1"
PACKAGE="$2"

# Deployment location of your Node.js app
DEPLOYDIR="/path/to/DeployDirectory"

cd $DEPLOYDIR
tar -xzf $PACKAGE

if [ $? -eq 0 ]; then
  echo "Package $PACKAGE extracted under $DEPLOYDIR"
else
  echo "ERROR: Failed to extract $PACKAGE! Exiting ..."
  exit 1
fi

# Start Node app
$DEPLOYDIR/$APPNAME/bin/start.sh

Deployment requirements can be very different for individual engineering operations. All that has been suggested should be taken as simplified use cases. The main objective is to come up with a self-contained Node.js application so that developers can autonomously package their code with a version-consistent Node binary and dependencies. A big advantage of such an approach is separation of concerns: the OPS team does not need to worry about Node installation and versioning.

A Brief Encounter With Docker

Docker, an application-container distribution automation software using Linux-based virtualization, has gained a lot of momentum since it was released in 2013. I never had a chance to try it out, but a current project has prompted me to bubble it up my ever-growing to-do list. Below is a recap of my first two hours of experimenting with Docker.

First things first: get a quick grasp of Docker's basics. I was going to test it on a MacBook and decided to go for the beta version of Docker for Mac. It's essentially a native-app version of Docker Toolbox, with the small trade-off of being limited to a single VM, which can be overcome if one uses it alongside Docker Toolbox. The key differences between the two apps are nicely illustrated on Docker's website.

Downloading and installing Docker for Mac was straightforward. Below is some configuration info about the installed software:

# Show info
docker info
...
Server Version: 1.12.0-rc2
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 87
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: null host bridge overlay
Swarm: inactive
Runtimes: default
Default Runtime: default
Security Options: seccomp
Kernel Version: 4.4.13-moby
Operating System: Alpine Linux v3.4
OSType: linux
Architecture: x86_64
...

# Show version
docker version
Client:
 Version:      1.12.0-rc2
 API version:  1.24
 Go version:   go1.6.2
 Git commit:   906eacd
 Built:        Fri Jun 17 20:35:33 2016
 OS/Arch:      darwin/amd64
 Experimental: true
Server:
 Version:      1.12.0-rc2
 API version:  1.24
 Go version:   go1.6.2
 Git commit:   a7119de
 Built:        Fri Jun 17 22:09:20 2016
 OS/Arch:      linux/amd64
 Experimental: true

Next, it’s almost illegal not to run something by the name of hello-world when installing a new software. While at it, test run a couple of less trivial apps to get a feel of running Docker-based apps, including an Nginx and a Ubuntu Bash shell.

# Run hello-world
docker run hello-world

# Run Nginx in background exposing port 80
docker run -d -p 80:80 --name webserver nginx

# Run Ubuntu Bash interactively on a tty terminal
docker run -it ubuntu bash

# Show all containers
docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS                      PORTS                         NAMES
ee8917ebea6e        ubuntu              "bash"                   48 seconds ago      Exited (0) 24 seconds ago                                 stoic_payne
795a36227667        nginx               "nginx -g 'daemon off"   24 minutes ago      Up 24 minutes               0.0.0.0:80->80/tcp, 443/tcp   webserver
89de18ed6afd        hello-world         "/hello"                 41 minutes ago      Exited (0) 41 minutes ago                                 silly_fermi

While running hello-world or the Ubuntu shell is a one-time deal (e.g. the Ubuntu shell, once exited, is gone), the -d (for detach) run-command option for Nginx leaves the server running in the background. To identify all actively running Docker containers and stop them, below is one quick way:

# Show all running containers
docker ps -q
795a36227667

# Stop all running containers
docker stop $(docker ps -q)

# Show all images
docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
hello-world         latest              693bce725149        2 weeks ago         967 B
nginx               latest              0d409d33b27e        2 weeks ago         182.8 MB
ubuntu              latest              2fa927b5cdd3        3 weeks ago         122 MB

It’s also almost illegal to let any hello-world apps sitting around forever, so it’s a perfect candidate for testing image removal. You’ll have to remove all associated containers before removing the image. Here’s one option:

# Remove all containers associated with hello-world
docker ps -a | grep hello-world | awk '{print $1}' | xargs docker rm

# Remove image hello-world
docker rmi hello-world

Note that the above method only removes those containers whose description matches the image name. If an associated container lacks the matching name, you'll need to remove it manually (docker rm <container id>). Alternatively, docker ps -a -q --filter ancestor=hello-world lists the associated containers by image regardless of name.

Adapted from Linux's cowsay game, Docker provides a whalesay game and illustrates how to combine it with another Linux game, fortune, to create a custom image. This requires composing a Dockerfile with the proper instructions to build the image, as shown below:

# Download and run Whalesay
docker run docker/whalesay cowsay boo

# Create a subdirectory for image building
cd ~/projects/docker/
mkdir dockerbuild
cd dockerbuild/

# Create Dockerfile
vi Dockerfile
FROM docker/whalesay:latest
RUN apt-get -y update && apt-get install -y fortunes
CMD /usr/games/fortune -a | cowsay
:wq

# Build Docker image fortune-whalesay
docker build -t fortune-whalesay .
Sending build context to Docker daemon 2.048 kB
Step 1 : FROM docker/whalesay:latest
 ---> 6b362a9f73eb
Step 2 : RUN apt-get -y update && apt-get install -y fortunes
 ---> Running in 6f41bac87627
Ign http://archive.ubuntu.com trusty InRelease
Get:1 http://archive.ubuntu.com trusty-updates InRelease [65.9 kB]
Get:2 http://archive.ubuntu.com trusty-security InRelease [65.9 kB]
Hit http://archive.ubuntu.com trusty Release.gpg
Hit http://archive.ubuntu.com trusty Release
...
Setting up librecode0:amd64 (3.6-21) ...
Setting up fortune-mod (1:1.99.1-7) ...
Setting up fortunes-min (1:1.99.1-7) ...
Setting up fortunes (1:1.99.1-7) ...
Processing triggers for libc-bin (2.19-0ubuntu6.6) ...
 ---> 723b1dc99873
Removing intermediate container 6f41bac87627
Step 3 : CMD /usr/games/fortune -a | cowsay
 ---> Running in f0edad30afa2
 ---> a6bd063431a4
Removing intermediate container f0edad30afa2
Successfully built a6bd063431a4

docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
fortune-whalesay    latest              a6bd063431a4        36 minutes ago      274.6 MB
nginx               latest              0d409d33b27e        2 weeks ago         182.8 MB
ubuntu              latest              2fa927b5cdd3        3 weeks ago         122 MB
docker/whalesay     latest              6b362a9f73eb        13 months ago       247 MB

docker run fortune-whalesay
/ A pound of salt will not sweeten a \
\ single cup of tea.                 /
 ------------------------------------
...

Next, to manage your Docker images in the cloud, sign up for an account at Docker Hub. Similar to GitHub, Docker Hub allows you to maintain public image repos for free. To push Docker images to your Docker Hub account, you'll need to name your images with a namespace matching your account's. The easiest way is to prefix your image name with your account name.

For instance, to push the fortune-whalesay image to Docker Hub with account name leocc, rename it to leocc/fortune-whalesay:

# Rename an image (first add a tag then remove old tag with rmi)
docker tag fortune-whalesay leocc/fortune-whalesay
docker rmi fortune-whalesay

# Push image to Docker Hub
docker login -u leocc
docker push leocc/fortune-whalesay

Finally, it’s time to try actually dockerize an app of my own and push it to Docker Hub. A Java app of a simple NIO-based Reactor server is being used here:

# Dockerize Java app NIO-Reactor

# Designate a build subdirectory and copy Java source code
cd ~/projects/docker/dockerbuild/
mkdir reactor
cp -p ~/projects/jstuff/reactor/src/reactor/*.java reactor/

# Create .dockerignore
vi .dockerignore
.DS_Store
.DS_Store?
._*
:wq

# Create Dockerfile
vi Dockerfile
FROM java:8
COPY . /tmp
WORKDIR /tmp
EXPOSE 9090
RUN javac reactor/*.java
RUN jar -cf reactor/nioreactor.jar reactor/*.class
CMD java -classpath reactor/nioreactor.jar reactor.Reactor
:wq

# Build image
docker build -t leocc/nioreactor .

# Test run Dockerized app
docker run -t -p 9090:9090 leocc/nioreactor

# Test the NIO Reactor server from a telnet client
telnet localhost 9090

# Push image to Docker Hub
docker login -u leocc
docker push leocc/nioreactor

The dockerized Java app is now at Docker Hub. Now that it's in the cloud, you may remove the local image and associated containers as described earlier. When you want to download and run it later, simply issue the docker run command.

My brief experience exploring Docker's basics has been positive. If you're familiar with Linux and GitHub, picking up the commands for various tasks in Docker comes naturally. As for the native Docker for Mac app, even though it's still in beta, it executed every command reliably as advertised.