Tag Archives: node.js

React Redux – Actions & Reducers

Having been immersed in coding JavaScript exclusively with Node.js and React over the past couple of months, I’ve come to appreciate the versatility and robustness the “combo” has to offer. I’ve always liked the minimalist design of Node.js and consider it a top candidate whenever an app/API server needs to be built. Besides ordinary app servers, Node has also been picked on a few occasions to serve decentralized applications (dApps) that involve smart contract deployments to public blockchains. In fact, Node and React are also a popular tech stack for dApp frameworks such as Scaffold-ETH.

React & React Redux

React is relatively new to me, though it’s rather easy to pick up the basics from React‘s official site. And many tutorials out there showcase how to build applications using React along with the feature-rich toolset within the React ecosystem. For instance, this tutorial code repo offers helpful insights into developing a React application with basic CRUD.

React can be complemented with Redux, which provides a central store for state updates in the UI components. Contrary to the local state maintained within a React component (oftentimes used for handling interactive state changes to input form elements), the central store can be shared across multiple components for state updates. That’s a key feature useful for the R&D project at hand.

Rather than just providing a plain global state repository for direct access, the Redux store is by design “decoupled” from the components. React Redux allows custom programmatic actions to be structured by user-defined action types. To dispatch an action, a component invokes the dispatch() function, which is the only mechanism that triggers a state change.
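
The dispatch-to-reducer flow can be sketched with a toy store in plain JavaScript (purely an illustration; the real store is created by Redux itself, not hand-rolled like this):

```javascript
// Minimal sketch of the dispatch -> reducer -> state cycle.
function createMiniStore(reducer, initialState) {
  let state = initialState;
  const listeners = [];
  return {
    getState: () => state,
    dispatch: (action) => {
      state = reducer(state, action);   // the reducer computes the next state
      listeners.forEach((fn) => fn());  // subscribed components would re-render
      return action;
    },
    subscribe: (fn) => listeners.push(fn),
  };
}

// A trivial reducer and a dispatched action:
const counter = (state = 0, action) =>
  action.type === "INCREMENT" ? state + 1 : state;

const store = createMiniStore(counter, 0);
store.dispatch({ type: "INCREMENT" });
console.log(store.getState()); // 1
```

Note that dispatch() is the only way the sketch’s state ever changes, which mirrors the design constraint described above.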

React actions & reducers

In general, a React action, which is oftentimes dispatched in response to a UI event (e.g. a click on a button), mainly does two things:

  1. It carries out the defined action, which is oftentimes an asynchronous function that invokes a user-defined service (for instance, a client HTTP call to a Node.js server).
  2. It connects with the Redux store and gets funneled into a reduction process. The reduction is performed thru a user-defined reducer, which typically aggregates the state for the corresponding action type.

An action might look something like below:

const myAction = () => async (dispatch) => {
  try {
    const res = await myService.someFunction();
    dispatch({
      type: someActionType,
      payload: res.data,
    });
  } catch (err) {
    ...
  }
};

whereas a reducer generally has the following function signature:

const myReducer = (currState = initState, action) => {
  const { type, payload } = action;
  switch (type) {
    case someActionType:
      return someFormOfPayload;
    case anotherActionType:
      return anotherFormOfPayload;
    ...
    default:
      return currState;
  }
};
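
An action written as `() => async (dispatch) => ...` is a “thunk”: a function dispatched in place of a plain action object. Below is a minimal sketch of what Redux’s thunk middleware does under the hood (an illustration only, not the real library code; the fake store and names are assumptions for the demo):

```javascript
// If the dispatched "action" is a function, hand it the dispatch function;
// otherwise let the plain object pass through to the reducers.
const thunk = (store) => (next) => (action) =>
  typeof action === "function" ? action(store.dispatch) : next(action);

// Wire it to a fake store that just records dispatched actions:
const dispatched = [];
const fakeStore = { dispatch: (a) => dispatched.push(a) };
const dispatchWithThunk = thunk(fakeStore)((a) => dispatched.push(a));

// A thunk-style action (synchronous here to keep the sketch deterministic):
const myAction = () => (dispatch) =>
  dispatch({ type: "SOME_ACTION", payload: 42 });

dispatchWithThunk(myAction());
console.log(dispatched[0].payload); // 42
```

This is why a component can `dispatch(myAction())` even though myAction() is not a plain `{ type, payload }` object.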

Example of a React action

${react-project-root}/src/actions/user.js

import {
  CREATE_USER,
  RETRIEVE_USERS,
  UPDATE_USER,
  DELETE_USER
} from "./types";
import UserDataService from "../services/user.service";
export const createUser = (username, password, email, firstName, lastName) => async (dispatch) => {
  try {
    const res = await UserDataService.create({ username, password, email, firstName, lastName });
    dispatch({
      type: CREATE_USER,
      payload: res.data,
    });
    return Promise.resolve(res.data);
  } catch (err) {
    return Promise.reject(err);
  }
};
export const findUsersByEmail = (email) => async (dispatch) => {
  try {
    const res = await UserDataService.findByEmail(email);
    dispatch({
      type: RETRIEVE_USERS,
      payload: res.data,
    });
  } catch (err) {
    console.error(err);
  }
};
export const updateUser = (id, data) => async (dispatch) => {
  try {
    const res = await UserDataService.update(id, data);
    dispatch({
      type: UPDATE_USER,
      payload: data,
    });
    return Promise.resolve(res.data);
  } catch (err) {
    return Promise.reject(err);
  }
};
export const deleteUser = (id) => async (dispatch) => {
  try {
    await UserDataService.delete(id);
    dispatch({
      type: DELETE_USER,
      payload: { id },
    });
  } catch (err) {
    console.error(err);
  }
};

Example of a React reducer

${react-project-root}/src/reducers/users.js

import {
  CREATE_USER,
  RETRIEVE_USERS,
  UPDATE_USER,
  DELETE_USER
} from "../actions/types";
const initState = [];
function userReducer(users = initState, action) {
  const { type, payload } = action;
  switch (type) {
    case CREATE_USER:
      return [...users, payload];
    case RETRIEVE_USERS:
      return payload;
    case UPDATE_USER:
      return users.map((user) => {
        if (user.id === payload.id) {
          return {
            ...user,
            ...payload,
          };
        } else {
          return user;
        }
      });
    case DELETE_USER:
      return users.filter(({ id }) => id !== payload.id);
    default:
      return users;
  }
}
export default userReducer;
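
Note that the reducer above never mutates the existing state: each case returns a new array or object. A quick standalone check of the UPDATE_USER branch’s spread-merge behavior:

```javascript
// Same spread-merge logic as the UPDATE_USER case, run on sample data.
const users = [{ id: 1, email: "a@x.io" }, { id: 2, email: "b@x.io" }];
const payload = { id: 2, email: "b@y.io" };

const next = users.map((user) =>
  user.id === payload.id ? { ...user, ...payload } : user
);

console.log(next[1].email);  // "b@y.io"  (new state carries the update)
console.log(users[1].email); // "b@x.io"  (original state untouched)
```

Keeping state updates immutable like this is what lets Redux detect changes by reference comparison.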

React components

Using React Hooks, which are built-in functions, the UI-centric React components harness powerful features for handling state, side effects, component properties, and more.
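
To illustrate that hooks are indeed plain functions whose state lives outside the component, here is a toy closure-based useState (purely an illustration; React’s real implementation works quite differently):

```javascript
// Toy model: state slots live in a closure; a cursor tracks call order,
// which is why real hooks must be called in the same order on every render.
function makeUseState() {
  const slots = [];
  let cursor = 0;
  const useState = (initial) => {
    const i = cursor++;
    if (!(i in slots)) slots[i] = initial;     // first "render" stores initial
    const setState = (v) => { slots[i] = v; }; // updates persist across renders
    return [slots[i], setState];
  };
  useState.reset = () => { cursor = 0; };      // simulate a re-render
  return useState;
}

const useState = makeUseState();
let [count, setCount] = useState(0); // first render: count is 0
setCount(count + 1);                 // event handler updates the slot
useState.reset();
[count] = useState(0);               // re-render: count is now 1
console.log(count); // 1
```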

To dispatch an action, React Redux’s useDispatch hook can be used, which might look like below:

import { useDispatch, useSelector } from "react-redux";
...
  const dispatch = useDispatch();
  ...
    dispatch(myAction(someRecord.id, someRecord))  // Corresponding service returns a promise
      .then((response) => {
        setMessage("myAction successful!");
        ...
      })
      .catch(err => {
        ...
      });
  ...

And to retrieve the state of a certain item from the Redux store, the useSelector hook allows one to use a selector function to extract the target item as follows:

  const myRecords = useSelector(state => state.myRecords);  // Reducer myRecords.js
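
A selector itself is nothing more than a plain function from the whole store state to the slice a component needs; useSelector simply re-runs it whenever the store updates. A standalone sketch (state shape and names here are illustrative):

```javascript
// Sample store state with a "users" slice, as produced by the users reducer.
const state = {
  users: [{ id: 1, email: "a@x.io" }, { id: 2, email: "b@y.io" }],
};

// A plain slice selector, and a parameterized derived selector:
const selectUsers = (s) => s.users;
const selectUserByEmail = (email) => (s) =>
  s.users.find((u) => u.email === email);

console.log(selectUsers(state).length);             // 2
console.log(selectUserByEmail("b@y.io")(state).id); // 2
```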

Example of a React component

${react-project-root}/src/components/UserList.js

import React, { useState, useEffect } from "react";
import { useDispatch, useSelector } from "react-redux";
import { Link } from "react-router-dom";
import { retrieveUsers, findUsersByEmail } from "../actions/user";
const UserList = () => {
  const dispatch = useDispatch();
  const users = useSelector(state => state.users);
  const [currentUser, setCurrentUser] = useState(null);
  const [currentIndex, setCurrentIndex] = useState(-1);
  const [searchEmail, setSearchEmail] = useState("");
  useEffect(() => {
    dispatch(retrieveUsers());
  }, [dispatch]);
  const onChangeSearchEmail = e => {
    const searchEmail = e.target.value;
    setSearchEmail(searchEmail);
  };
  const refreshData = () => {
    setCurrentUser(null);
    setCurrentIndex(-1);
  };
  const setActiveUser = (user, index) => {
    setCurrentUser(user);
    setCurrentIndex(index);
  };
  const findByEmail = () => {
    refreshData();
    dispatch(findUsersByEmail(searchEmail));
  };
  return (
    <div className="list row">
      <div className="col-md-9">
        <div className="input-group mb-3">
          <input
            type="text"
            className="form-control"
            id="searchByEmail"
            placeholder="Search by email"
            value={searchEmail}
            onChange={onChangeSearchEmail}
          />
          <div className="input-group-append">
            <button
              className="btn btn-warning m-2"
              type="button"
              onClick={findByEmail}
            >
              Search
            </button>
          </div>
        </div>
      </div>
      <div className="col-md-5">
        <h4>User List</h4>
        <ul className="list-group">
          {users &&
            users.map((user, index) => (
              <li
                className={
                  "list-group-item " + (index === currentIndex ? "active" : "")
                }
                onClick={() => setActiveUser(user, index)}
                key={index}
              >
                <div className="row">
                  <div className="col-md-2">{user.id}</div>
                  <div className="col-md-10">{user.email}</div>
                </div>
              </li>
            ))}
        </ul>
        <Link to="/add-user"
          className="btn btn-warning mt-2 mb-2"
        >
          Create a user
        </Link>
      </div>
      <div className="col-md-7">
        {currentUser ? (
          <div>
            <h4>User</h4>
            <div className="row">
              <div className="col-md-3 fw-bold">ID:</div>
              <div className="col-md-9">{currentUser.id}</div>
            </div>
            <div className="row">
              <div className="col-md-3 fw-bold">Username:</div>
              <div className="col-md-9">{currentUser.username}</div>
            </div>
            <div className="row">
              <div className="col-md-3 fw-bold">Email:</div>
              <div className="col-md-9">{currentUser.email}</div>
            </div>
            <div className="row">
              <div className="col-md-3 fw-bold">First Name:</div>
              <div className="col-md-9">{currentUser.firstName}</div>
            </div>
            <div className="row">
              <div className="col-md-3 fw-bold">Last Name:</div>
              <div className="col-md-9">{currentUser.lastName}</div>
            </div>
            <Link
              to={"/user/" + currentUser.id}
              className="btn btn-warning mt-2 mb-2"
            >
              Edit
            </Link>
          </div>
        ) : (
          <div>
            <br />
            <p>Please click on a user for details ...</p>
          </div>
        )}
      </div>
    </div>
  );
};
export default UserList;

It should be noted that, despite having been stripped down for simplicity, the above sample code might still include a bit too much detail for React beginners. For now, the primary goal is to highlight how an action is dispatched via dispatch() in response to a certain UI event to interactively update state in the Redux central store thru a corresponding reducer function.

In the next blog post, we’ll dive a little deeper into React components and how they have evolved from the class-based OOP (object oriented programming) to the FP (functional programming) style with React Hooks.

Node.js, PostgreSQL With Sequelize

A recent project has prompted me to adopt Node.js, a popular, by-design lean and mean server platform, as the server-side tech stack. With the requirement for a rather UI-feature-rich web application, I include React (a.k.a. ReactJS) as part of the tech stack. A backend database is needed, so I pick PostgreSQL. Thus, this is a deviation from the Scala / Akka Actor / Akka Stream tech stack I’ve been using in recent years.

PostgreSQL has always been one of my favorite database choices whenever a robust RDBMS with decent scalability is required for a given R&D project. With Node.js being the chosen app/API server and React the UI library for the project at hand, I decided to use Sequelize, a popular ORM tool in the Node ecosystem.

First and foremost, I must acknowledge the effective documentation on Sequelize’s official website, which allows developers new to the library to quickly pick up the essential know-how, from the getting-started basics to the more advanced topics.

Getting started

Assuming the Node.js project is already in place, to install the PostgreSQL driver and Sequelize, simply do the following under the Node project root directory:

$ npm install --save pg pg-hstore
$ npm install --save sequelize

Next, create a configuration script ${node-project-root}/app/config/db.config.js for PostgreSQL like below:

module.exports = {
  HOST: "localhost",
  USER: "leo",
  PASSWORD: "changeme!",
  DB: "leo",
  dialect: "postgres",
  pool: {
    max: 5,
    min: 0,
    acquire: 30000,
    idle: 10000
  }
};

For the data model, let’s create script files for a few sample tables under ${node-project-root}/app/models/:

# user.model.js 

module.exports = (sequelize, Sequelize) => {
  const User = sequelize.define("users", {
    username: {
      type: Sequelize.STRING
    },
    email: {
      type: Sequelize.STRING
    },
    password: {
      type: Sequelize.STRING
    },
    firstName: {
      type: Sequelize.STRING
    },
    lastName: {
      type: Sequelize.STRING
    }
  });
  return User;
};
# role.model.js

module.exports = (sequelize, Sequelize) => {
  const Role = sequelize.define("roles", {
    id: {
      type: Sequelize.INTEGER,
      primaryKey: true
    },
    name: {
      type: Sequelize.STRING
    }
  });
  return Role;
};
# order.model.js

module.exports = (sequelize, Sequelize) => {
  const Order = sequelize.define("orders", {
    orderDate: {
      type: Sequelize.DATE
    },
    userId: {
      type: Sequelize.INTEGER
    },
    // add other attributes here ...
  });
  return Order;
};
# item.model.js

module.exports = (sequelize, Sequelize) => {
  const Item = sequelize.define("items", {
    serialNum: {
      type: Sequelize.STRING
    },
    orderId: {
      type: Sequelize.INTEGER
    },
    // add other attributes here ...
  });
  return Item;
};

Sequelize instance

Note that within the above data model scripts, each of the table entities is represented by a function with two arguments — Sequelize refers to the Sequelize library, whereas sequelize is an instance of it. The instance is what’s required to connect to a given database. It has a method define() responsible for specifying the table definition including the table attributes and the by-default pluralized table name.

Also note that it looks as though the typical primary key column id is missing in most of the above table definitions. That’s because Sequelize automatically creates an auto-increment integer column id if none is specified. For a table intended to be set up with specific primary key values, define the key explicitly (similar to how table roles is set up in our sample models).

The Sequelize instance is created and initialized within ${node-project-root}/app/models/index.js as shown below.

# ${node-project-root}/app/models/index.js

const config = require("../config/db.config.js");
const Sequelize = require("sequelize");
const sequelize = new Sequelize(
  config.DB,
  config.USER,
  config.PASSWORD,
  {
    host: config.HOST,
    dialect: config.dialect,
    pool: {
      max: config.pool.max,
      min: config.pool.min,
      acquire: config.pool.acquire,
      idle: config.pool.idle
    }
  }
);
const db = {};
db.Sequelize = Sequelize;
db.sequelize = sequelize;
db.user = require("../models/user.model.js")(sequelize, Sequelize);
db.role = require("../models/role.model.js")(sequelize, Sequelize);
db.order = require("../models/order.model.js")(sequelize, Sequelize);
db.item = require("../models/item.model.js")(sequelize, Sequelize);
db.role.belongsToMany(db.user, {
  through: "user_role"
});
db.user.belongsToMany(db.role, {
  through: "user_role"
});
db.user.hasMany(db.order, {
  as: "order"
});
db.order.belongsTo(db.user, {
  foreignKey: "userId",
  as: "user"
});
db.order.hasMany(db.item, {
  as: "item"
});
db.item.belongsTo(db.order, {
  foreignKey: "orderId",
  as: "order"
});
db.ROLES = ["guest", "user", "admin"];
module.exports = db;

Data model associations

As can be seen from the index.js data model script, a Sequelize instance is instantiated with the database configuration loaded from db.config.js, and the table definitions from the individual model scripts are then registered with it.

Also included in the index.js script are examples of both the one-to-many and many-to-many association types. For instance, the relationship between tables users and orders is one-to-many with userId as the foreign key:

db.user.hasMany(db.order, {
  as: "order"
});
db.order.belongsTo(db.user, {
  foreignKey: "userId",
  as: "user"
});

whereas the relationship between users and roles is many-to-many:

db.role.belongsToMany(db.user, {
  through: "user_role"
});
db.user.belongsToMany(db.role, {
  through: "user_role"
});

Database schema naming conventions

Contrary to the camelCase naming style for variables in programming languages such as JavaScript, Java, and Scala, conventional RDBMSes tend to use the snake_case naming style for table and column names. To accommodate both conventions, Sequelize can map the camelCase attribute names in JavaScript objects to snake_case column names in the database schema. To keep the database schema in snake_case style, customize the Sequelize instance by specifying underscored: true within the define {} segment as shown below.

As mentioned in an earlier section, Sequelize pluralizes database table names by default. To suppress the auto-pluralization, also specify freezeTableName: true within define {} and define the tables with singular names within the individual model scripts.

const sequelize = new Sequelize(
  config.DB,
  config.USER,
  config.PASSWORD,
  {
    host: config.HOST,
    dialect: config.dialect,
    pool: {
      max: config.pool.max,
      min: config.pool.min,
      acquire: config.pool.acquire,
      idle: config.pool.idle
    },
    define: {
      underscored: true,
      freezeTableName: true
    }
  }
);

An “inconvenience” in PostgreSQL

Personally, I prefer keeping database table names singular. However, I have a table I’d like to name user, which is disallowed within PostgreSQL’s default schema namespace because PostgreSQL makes user a reserved keyword.

A work-around would be to define a custom schema that serves as a namespace in which all user-defined entities are contained. An inconvenient consequence is that when performing queries using tools like psql, one would need to alter the schema search path from the default public schema to the new one.

ALTER ROLE leo SET search_path TO myschema;

After weighing the pros and cons, I decided to go with Sequelize‘s default pluralized table naming. Other than this minor inconvenience, I find Sequelize an easy-to-pick-up ORM for wiring programmatic CRUD operations with PostgreSQL from within Node’s controller modules.

The following sample snippet highlights what a simple find-by-primary-key select and update might look like in a Node controller:

const db = require("../models");
const User = db.user;
...

exports.find = (req, res) => {
  const id = req.params.id;
  User.findByPk(id)
    .then(data => {
      if (data) {
        res.send(data);
      } else {
        res.status(404).send({
          message: `ERROR finding user with id=${id}!`
        });
      }
    })
    .catch(err => {
      res.status(500).send({
        message: `ERROR retrieving user data!`
      });
    });
};

exports.update = (req, res) => {
  const id = req.params.id;
  User.update(req.body, {
    where: { id: id }
  })
    .then(num => {
      if (num == 1) {
        res.send({
          message: "User was updated successfully!"
        });
      } else {
        res.send({
          message: `ERROR updating user with id=${id}!`
        });
      }
    })
    .catch(err => {
      res.status(500).send({
        message: `ERROR updating user data!`
      });
    });
};
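
For reference, the same find-by-primary-key logic can be expressed with async/await instead of .then/.catch chains. In this sketch, UserStub stands in for the Sequelize User model (an assumption, to keep the example self-contained), and the result is returned rather than written to an Express response:

```javascript
// Stub with the same findByPk contract as a Sequelize model:
// resolves to the row when found, or null when not.
const UserStub = {
  findByPk: async (id) => (id === 1 ? { id: 1, username: "leo" } : null),
};

async function findUser(id) {
  try {
    const data = await UserStub.findByPk(id);
    return data
      ? { status: 200, body: data }
      : { status: 404, body: { message: `ERROR finding user with id=${id}!` } };
  } catch (err) {
    return { status: 500, body: { message: "ERROR retrieving user data!" } };
  }
}

findUser(1).then((r) => console.log(r.status)); // 200
```

Either style works with Sequelize since its query methods return promises; async/await just flattens the branching.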

In the next blog post, we’ll shift our focus towards the popular UI library React and how state changes propagate across the UI components and the React Redux central store.

Self-contained Node.js Deployment

While setting up a Node.js environment on an individual developer’s machine can be done in a casual manner and oftentimes can be tailored to the developer’s own taste, deploying Node.js applications on shared or production servers requires a little more planning in advance.

To install Node.js on a server, a straightforward approach is to just follow some quick-start instructions from an official source. For instance, assuming the latest v.4.x of Node.js is the target version and CentOS Linux is the OS on the target server, the installation can be as simple as follows:

# Install EPEL (Extra Packages for Enterprise Linux)
sudo yum install epel-release

# Run Node.js pre-installation setup
curl -sL https://rpm.nodesource.com/setup_4.x | bash -

# Install Node.js
sudo yum install -y nodejs

For Ubuntu:

# Install Node.js on Ubuntu
curl -sL https://deb.nodesource.com/setup_4.x | sudo -E bash -
sudo apt-get install -y nodejs

Software version: Latest versus Same

However, the above installation option leaves the version of the installed Node.js out of your own control. Although the major release would stick to v.4, the latest update to Node available at the time of the command execution will be installed.

There are debates about always-getting-the-latest versus keeping-the-same-version when it comes to software installation. My take is that on individual developer’s machine, you’re at liberty to go for ‘latest’ or ‘same’ to suit your own need (for exploring experimental features versus getting ready for production support). But on servers for staging, QA, or production, I would stick to ‘same’.

Some advocates of ‘latest’ even for production servers argue that not doing so could compromise security on the servers. It’s a valid concern, but stability is also a critical factor. My recommendation is to keep versions on critical servers consistent while making security updates a separate and independent duty, preferably handled by dedicated operations staff.

Onto keeping a fixed Node.js version

As of this writing, the latest LTS (long-term support) release of Node.js is v.4.4.7. The next LTS (v.6.x) is scheduled to be out in the next quarter of the year. Again, let’s assume we’re on CentOS, and that it’s CentOS 7 64-bit. There are a couple of options.

Option 1: Build from source

# Install Node.js v4.4.7 - Build from source
mkdir ~/nodejs
cd ~/nodejs

curl http://nodejs.org/dist/v4.4.7/node-v4.4.7.tar.gz | tar xz --strip-components=1

./configure --prefix=/usr/local
make
sudo make install

As a side note, if you’re on CentOS 6 or older, you’ll need to update gcc and Python.

Option 2: Use pre-built binary

# Install Node.js v4.4.7 - Linux binary (64bit)
mkdir ~/nodejs
cd ~/nodejs
curl http://nodejs.org/dist/v4.4.7/node-v4.4.7-linux-x64.tar.gz | tar xz --strip-components=1

# Install Node under /opt
mkdir ~/nodejs/etc
echo 'prefix=/usr/local' > ~/nodejs/etc/npmrc
sudo mv ~/nodejs /opt/
sudo chown -R root:root /opt/nodejs

# Create soft links in standard search path
sudo ln -s /opt/nodejs/bin/node /usr/local/bin/node
sudo ln -s /opt/nodejs/bin/npm /usr/local/bin/npm

Note that both of the above options install a system-wide Node.js (which comes with the default package manager npm) accessible to all legitimate users on the server host.

Node process manager

Next, install a process manager to manage processes of the Node app, providing features such as auto-restart. Two of the most prominent ones are forever and pm2. Let’s go with the slightly more robust one, pm2. Check for the latest version from the pm2 website and specify it in the npm install command:

# Install global pm2 v1.1.3
sudo npm install -g pm2@1.1.3

# Verify installed pm2
cd /usr/local/lib
npm list | grep pm2

Deploying self-contained Node.js

Depending on specific deployment requirements, one might prefer having Node confined to a local file structure that belongs to a designated user on the server host. Contrary to having a system-wide Node.js, this approach would equip each of your Node projects with its own Node.js binary and modules.

Docker, as briefly touched on in a previous blog, would be a good tool for such a use case, but one can also handle it without introducing an OS-level virtualization layer. Here’s how Node.js can be installed underneath a local Node.js project directory:

# Project directory of your Node.js app
PROJDIR="/path/to/MyNodeApp"

# Install local Node.js v4.4.7 Linux binary (64bit)
mkdir $PROJDIR/nodejs
cd $PROJDIR/nodejs
curl http://nodejs.org/dist/v4.4.7/node-v4.4.7-linux-x64.tar.gz | tar xz --strip-components=1

# Install local pm2 v1.1.3
# pm2 will be installed under $PROJDIR/nodejs/lib/node_modules/pm2/bin/
cd $PROJDIR/nodejs/lib
sudo $PROJDIR/nodejs/bin/npm install pm2@1.1.3
$PROJDIR/nodejs/bin/npm list | grep pm2

Next, create simple scripts to start/stop the local Node.js app (assuming the main Node app is app.js):

Script: $PROJDIR/bin/njsenv.sh (sourced by start/stop scripts)

#!/bin/bash
# $PROJDIR/bin/njsenv.sh

ENVSCRIPT="$0"

# Get absolute filepath of this setenv script
ENVBINPATH="$( cd "$( dirname "$ENVSCRIPT" )" && pwd )"

# Get absolute filepath of the Nodejs project
PROJPATH="$( cd "$ENVBINPATH" && cd ".." && pwd )"

# Get absolute filepath of the Nodejs bin
NJSBINPATH="$( cd "$PROJPATH" && cd nodejs/bin && pwd )"

# Get absolute filepath of the process manager
PMGRPATH=${PROJPATH}/nodejs/lib/node_modules/pm2/bin

# Function for prepending a path segment that is not yet in PATH
pathprepend() {
    for ARG in "$@"
    do
        if [ -d "$ARG" ] && [[ ":$PATH:" != *":$ARG:"* ]]; then
            PATH="$ARG${PATH:+":$PATH"}"
        fi
    done
}

pathprepend "$PMGRPATH" "$NJSBINPATH"

echo "PATH: $PATH"

Script: $PROJDIR/bin/start.sh

#!/bin/bash

SCRIPT="$0"

# Get absolute filepath of this script
BINPATH="$( cd "$( dirname "$SCRIPT" )" && pwd )"

if [ ! -f "${BINPATH}/njsenv.sh" ]
then
    echo "${BINPATH}/njsenv.sh cannot be found! Aborting ..."
    exit 1
fi

# Set env for PATH and project/app:
#   PATH       = Linux path
#   PROJPATH   = Nodejs project
#   NJSBINPATH = Nodejs bin
#   PMGRPATH   = pm2 path
source ${BINPATH}/njsenv.sh

NODEAPP="app.js"

PMGR=${PMGRPATH}/pm2

echo "Starting $NODEAPP at $PROJPATH ..."

CMD="cd $PROJPATH && $PMGR start $NODEAPP"

# Start Nodejs main app
eval $CMD

echo "Command executed: $CMD"

Script: $PROJDIR/bin/stop.sh

#!/bin/bash

SCRIPT="$0"

# Get absolute filepath of this script
BINPATH="$( cd "$( dirname "$SCRIPT" )" && pwd )"

if [ ! -f "${BINPATH}/njsenv.sh" ]
then
    echo "${BINPATH}/njsenv.sh cannot be found! Aborting ..."
    exit 1
fi

# Set env for PATH and project/app:
#   PATH       = Linux path
#   PROJPATH   = Nodejs project
#   NJSBINPATH = Nodejs bin
#   PMGRPATH   = pm2 path
source ${BINPATH}/njsenv.sh


PMGR=${PMGRPATH}/pm2

echo "Stopping all Node.js processes ..."

CMD="cd $PROJPATH && $PMGR stop all"

# Stop all Nodejs processes
eval $CMD

echo "Command executed: $CMD"

It would make sense to organize such scripts in, say, a top-level bin/ subdirectory. Along with the typical file structure of your Node app such as controllers, routes, configurations, etc, your Node.js project directory might now look like the following:

$PROJDIR/
    app.js
    bin/
      njsenv.sh
      start.sh
      stop.sh
    config/
    controllers/
    log/
    models/
    nodejs/
        bin/
        lib/
            node_modules/
    node_modules/
    package.json
    public/
    routes/
    views/

Packaging/Bundling your Node.js app

Now that the key Node.js software modules are in place all within a local $PROJDIR subdirectory, next in line is to shift the focus to your own Node app and create some simple scripts for bundling the app.

This blog post aims to cover relatively simple deployment cases in which there isn’t a need for environment-specific code builds. Should such a need arise, chances are that you might already be using a build automation tool such as gulp, which was heavily used by a Node app in a recent startup I cofounded. In addition, if the deployment requirements are complex enough, configuration management/automation tools like Puppet, SaltStack or Chef might also be used.

For simple Node.js deployments where the app modules can be pre-built prior to deployment, one can come up with simple scripts to pre-package the app in a tar ball, which then gets expanded in the target server environments.

To better manage files for the packaging/bundling task, it’s a good practice to maintain a list of files/directories to be included in a text file, say, include.files. For instance, if there is no need for environment-specific code builds, package.json doesn’t need to be included when packaging for the QA/production environment. While at it, also keep a file, exclude.files, that lists all the files/directories to be excluded. For example:

# File include.files:
app.js
config
controllers
models
node_modules
nodejs
public
routes
views

# File exclude.files:
.DS_Store
.git

Below is a simple shell script which does the packaging/bundling of a localized Node.js project:

#!/bin/bash

# Project directory of your Node.js app
PROJDIR="/path/to/MyNodeApp"

# Extract package name and version
NAME=`grep -o '"name"[ \t]*:[ \t]*"[^"]*"' $PROJDIR/package.json | sed -n 's/.*:[ \t]*"\([^"]*\)"/\1/p'`
VERSION=`grep -o '"version"[ \t]*:[ \t]*"[^"]*"' $PROJDIR/package.json | sed -n 's/.*:[ \t]*"\([^"]*\)"/\1/p'`

if [ "$NAME" = "" ]  || [ "$VERSION" = "" ]; then
  echo "ERROR: Package name or version not found! Exiting ..."
  exit 1
fi

# Copy files/directories based on 'include.files' to the bundle subdirectory
cd $PROJDIR

# Create/Recreate bundle subdirectory
rm -rf bundle
mkdir bundle
mkdir bundle/$NAME

for file in `cat include.files`; do cp -rp "$file" bundle/$NAME ; done

# Tar-gz content excluding files/directories based on 'exclude.files'
cd bundle
tar --exclude-from=../exclude.files -czf $NAME-$VERSION.tar.gz $NAME
if [ $? -eq 0 ]; then
  echo "Bundle created under $PROJDIR/bundle: $NAME-$VERSION.tar.gz"
else
  echo "ERROR: Bundling failed!"
fi

rm -rf $NAME

Run bundling scripts from within package.json

An alternative to doing the packaging/bundling with external scripts is to make use of npm’s own features. The popular Node package manager comes with file exclusion rules based on files listed in .npmignore and .gitignore. It also comes with scripting capability to handle much of what’s just described. For example, one could define a custom file-inclusion variable within package.json along with executable scripts that do the packaging/bundling using variables in the form of $npm_package_{var}, like the following:

{
  "name": "mynodeapp",
  "version": "1.0.0",
  "main": "app.js",
  "description": "My Node.js App",
  "author": "Leo C.",
  "license": "ISC",
  "dependencies": {
    "config": "~1.21.0",
    "connect-redis": "~3.1.0",
    "express": "~4.14.0",
    "gulp": "~3.9.1",
    "gulp-mocha": "~2.2.0",
    "helmet": "~2.1.1",
    "lodash": "~4.13.1",
    "mocha": "~2.5.3",
    "passport": "~0.3.2",
    "passport-local": "~1.0.0",
    "pg": "^6.0.3",
    "pg-native": "^1.10.0",
    "q": "~1.4.1",
    "redis": "^2.6.2",
    "requirejs": "~2.2.0",
    "swig": "~1.4.2",
    "winston": "~2.2.0"
  },
  "bundleinclude": "app.js config/ controllers/ models/ node_modules/ nodejs/ public/ routes/ views/",
  "scripts": {
    "bundle": "rm -rf bundle && mkdir bundle && mkdir bundle/$npm_package_name && cp -rp $npm_package_bundleinclude bundle/$npm_package_name && cd bundle && tar --exclude-from=../.npmignore -czf $npm_package_name-$npm_package_version.tgz $npm_package_name && rm -rf $npm_package_name"
  }
}

Here comes another side note: In the dependencies section, a version with prefix ~ qualifies any version with patch-level update (e.g. ~1.2.3 allows any 1.2.x update), whereas prefix ^ qualifies minor-level update (e.g. ^1.2.3 allows any 1.x.y update).
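
The two range rules can be sketched as a rough matcher in plain JavaScript (an illustration only; npm’s real matcher is the semver package, and cases like ^0.x carry extra rules not shown here):

```javascript
// Rough sketch of npm's ~ (patch-level) and ^ (minor-level) range rules.
function satisfies(version, range) {
  const op = range[0];
  const v = version.split(".").map(Number);
  const s = range.replace(/^[~^]/, "").split(".").map(Number);
  if (op === "~") // ~1.2.3 allows 1.2.x updates (x >= 3)
    return v[0] === s[0] && v[1] === s[1] && v[2] >= s[2];
  if (op === "^") // ^1.2.3 allows 1.x.y updates (>= 1.2.3)
    return v[0] === s[0] && (v[1] > s[1] || (v[1] === s[1] && v[2] >= s[2]));
  return version === range; // no prefix: exact match only
}

console.log(satisfies("1.2.9", "~1.2.3")); // true
console.log(satisfies("1.3.0", "~1.2.3")); // false
console.log(satisfies("1.9.0", "^1.2.3")); // true
console.log(satisfies("2.0.0", "^1.2.3")); // false
```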

To deploy the Node app on a server host, simply scp the bundled tar ball to the designated user on the host (e.g. scp $NAME-$VERSION.tgz njsapp@:package/), then use a simple script similar to the following to extract the bundled tar ball on the host and start the Node app:

#!/bin/bash

if [ $# -ne 2 ]
then
    echo "Usage: $0 <appname> <package>"
    echo "  e.g. $0 mynodeapp ~/package/mynodeapp-1.0.0.tar.gz"
    exit 0
fi

APPNAME="$1"
PACKAGE="$2"

# Deployment location of your Node.js app
DEPLOYDIR="/path/to/DeployDirectory"

cd $DEPLOYDIR
tar -xzf $PACKAGE

if [ $? -eq 0 ]; then
  echo "Package $PACKAGE extracted under $DEPLOYDIR"
else
  echo "ERROR: Failed to extract $PACKAGE! Exiting ..."
  exit 1
fi

# Start Node app
$DEPLOYDIR/$APPNAME/bin/start.sh

Deployment requirements can be very different for individual engineering operations. All that has been suggested should be taken as simplified use cases. The main objective is to come up with a self-contained Node.js application so that developers can autonomously package their code with a version-consistent Node binary and dependencies. A big advantage of such an approach is separation of concerns: the OPS team does not need to worry about Node installation and versioning.