Node.js, PostgreSQL With Sequelize

A recent project has prompted me to adopt Node.js, a popular JavaScript runtime designed to be lean and mean, as the server-side tech stack. Since the project calls for a rather feature-rich web UI, I include React (a.k.a. ReactJS) as part of the tech stack. A backend database is needed, so I pick PostgreSQL. Thus, this is a deviation from the Scala / Akka Actor / Akka Stream tech stack I’ve been using in recent years.

PostgreSQL has always been one of my favorite database choices whenever a robust RDBMS with decent scalability is required for an R&D project. With Node.js being the chosen app/API server and React the UI library for the project at hand, I decided to use Sequelize, a popular ORM tool in the Node ecosystem.

First and foremost, I must acknowledge the effective documentation on Sequelize’s official website, which allows developers new to it to quickly pick up the essential know-how and move on to the more advanced topics.

Getting started

Assuming the Node.js project is already in place, to install the PostgreSQL driver and Sequelize, simply do the following under the Node project root directory:

$ npm install --save pg pg-hstore
$ npm install --save sequelize

Next, create a configuration script ${node-project-root}/app/config/db.config.js for PostgreSQL like below:

module.exports = {
  HOST: "localhost",
  USER: "leo",
  PASSWORD: "changeme!",
  DB: "leo",
  dialect: "postgres",
  pool: {
    max: 5,
    min: 0,
    acquire: 30000,
    idle: 10000
  }
};

For the data model, let’s create script files for a few sample tables under ${node-project-root}/app/models/:

# user.model.js 

module.exports = (sequelize, Sequelize) => {
  const User = sequelize.define("users", {
    username: {
      type: Sequelize.STRING
    },
    email: {
      type: Sequelize.STRING
    },
    password: {
      type: Sequelize.STRING
    },
    firstName: {
      type: Sequelize.STRING
    },
    lastName: {
      type: Sequelize.STRING
    }
  });
  return User;
};
# role.model.js

module.exports = (sequelize, Sequelize) => {
  const Role = sequelize.define("roles", {
    id: {
      type: Sequelize.INTEGER,
      primaryKey: true
    },
    name: {
      type: Sequelize.STRING
    }
  });
  return Role;
};
# order.model.js

module.exports = (sequelize, Sequelize) => {
  const Order = sequelize.define("orders", {
    orderDate: {
      type: Sequelize.DATE
    },
    userId: {
      type: Sequelize.INTEGER
    },
    // add other attributes here ...
  });
  return Order;
};
# item.model.js

module.exports = (sequelize, Sequelize) => {
  const Item = sequelize.define("items", {
    serialNum: {
      type: Sequelize.STRING
    },
    orderId: {
      type: Sequelize.INTEGER
    },
    // add other attributes here ...
  });
  return Item;
};

Sequelize instance

Note that within the above data model scripts, each table entity is represented by a function with two arguments: Sequelize refers to the Sequelize library, whereas sequelize is an instance of it. The instance is what connects to a given database, and its define() method specifies the table definition, including the table attributes and the (by default pluralized) table name.

Also note that the typical primary key column id appears to be missing from most of the above table definitions. That’s because Sequelize automatically creates an auto-increment integer column id if none is specified. For a table intended to be populated with specific primary key values, define the key explicitly (as table roles is set up in our sample models).
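As a quick illustration, here is a minimal, hypothetical sketch (assuming the db object assembled in index.js, shown in the next section) of seeding such a table with specific primary key values:

// Minimal sketch: seed the roles table with explicit primary key values.
// Assumes the db object exported by app/models/index.js (see next section).
const db = require("../models");

db.sequelize.sync()
  .then(() => db.role.bulkCreate([
    { id: 1, name: "guest" },
    { id: 2, name: "user" },
    { id: 3, name: "admin" }
  ]))
  .then(roles => console.log(`Seeded ${roles.length} roles`));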

The Sequelize instance is created and initialized within ${node-project-root}/app/models/index.js as shown below.

# ${node-project-root}/app/models/index.js

const config = require("../config/db.config.js");
const Sequelize = require("sequelize");
const sequelize = new Sequelize(
  config.DB,
  config.USER,
  config.PASSWORD,
  {
    host: config.HOST,
    dialect: config.dialect,
    pool: {
      max: config.pool.max,
      min: config.pool.min,
      acquire: config.pool.acquire,
      idle: config.pool.idle
    }
  }
);
const db = {};
db.Sequelize = Sequelize;
db.sequelize = sequelize;
db.user = require("../models/user.model.js")(sequelize, Sequelize);
db.role = require("../models/role.model.js")(sequelize, Sequelize);
db.order = require("../models/order.model.js")(sequelize, Sequelize);
db.item = require("../models/item.model.js")(sequelize, Sequelize);
db.role.belongsToMany(db.user, {
  through: "user_role"
});
db.user.belongsToMany(db.role, {
  through: "user_role"
});
db.user.hasMany(db.order, {
  as: "order"
});
db.order.belongsTo(db.user, {
  foreignKey: "userId",
  as: "user"
});
db.order.hasMany(db.item, {
  as: "item"
});
db.item.belongsTo(db.order, {
  foreignKey: "orderId",
  as: "order"
});
db.ROLES = ["guest", "user", "admin"];
module.exports = db;

Data model associations

As can be seen in index.js, the Sequelize instance is instantiated with the database configuration loaded from db.config.js, and the table definitions are then pulled in from the individual model scripts.

Also included in the index.js script are examples of both the one-to-many and many-to-many association types. For instance, the relationship between tables users and orders is one-to-many, with userId as the foreign key:

db.user.hasMany(db.order, {
  as: "order"
});
db.order.belongsTo(db.user, {
  foreignKey: "userId",
  as: "user"
});

whereas the relationship between users and roles is many-to-many:

db.role.belongsToMany(db.user, {
  through: "user_role"
});
db.user.belongsToMany(db.role, {
  through: "user_role"
});
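With these associations in place, related rows can be fetched via eager loading. Below is a minimal, hypothetical sketch (the username value is made up) that pulls a user together with the associated roles and orders using the aliases defined above:

// Hypothetical eager-loading query using the associations defined in index.js.
const db = require("../models");

db.user.findOne({
  where: { username: "leo" },          // made-up lookup value
  include: [
    { model: db.role },                // many-to-many via the user_role join table
    { model: db.order, as: "order" }   // one-to-many, using the "order" alias
  ]
}).then(user => {
  if (user) console.log(JSON.stringify(user, null, 2));
});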

Database schema naming conventions

Contrary to the camelCase naming style for variables in programming languages such as JavaScript, Java, and Scala, conventional RDBMSes tend to use snake_case for table and column names. To bridge the two conventions, Sequelize can map a model’s camelCase attributes to snake_case column names: simply specify underscored: true within the define {} segment of the Sequelize instance options, as shown below.

As mentioned in an earlier section, Sequelize pluralizes database table names by default. To suppress the auto-pluralization, also specify freezeTableName: true within define {} and give the models singular names in the individual model scripts.

const sequelize = new Sequelize(
  config.DB,
  config.USER,
  config.PASSWORD,
  {
    host: config.HOST,
    dialect: config.dialect,
    pool: {
      max: config.pool.max,
      min: config.pool.min,
      acquire: config.pool.acquire,
      idle: config.pool.idle
    },
    define: {
      underscored: true,
      freezeTableName: true
    }
  }
);

An “inconvenience” in PostgreSQL

Personally, I prefer keeping database table names singular. However, I have a table I’d like to name user, which is disallowed within PostgreSQL’s default schema namespace. That’s because user is a reserved keyword in PostgreSQL.

A work-around would be to define a custom schema that serves as a namespace in which all user-defined entities are contained. An inconvenient consequence is that when performing queries using tools like psql, one would need to alter the schema search path from the default public schema to the new one.

ALTER ROLE leo SET search_path TO myschema;
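For what it’s worth, such a custom schema could also be referenced at the Sequelize model level. Below is a hypothetical sketch (the schema name myschema and the singular table name user are made up for illustration):

// Hypothetical model definition placing a singular "user" table under a custom schema.
module.exports = (sequelize, Sequelize) => {
  const User = sequelize.define("user", {
    username: { type: Sequelize.STRING },
    email: { type: Sequelize.STRING }
  }, {
    schema: "myschema",      // create/query the table under myschema instead of public
    freezeTableName: true    // keep the singular table name as-is
  });
  return User;
};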

After weighing the pros and cons, I decided to go with Sequelize‘s default pluralized table naming. Other than this minor inconvenience, I find Sequelize an easy-to-pick-up ORM for wiring programmatic CRUD operations with PostgreSQL from within Node’s controller modules.

The following sample snippet highlights what a simple find-by-primary-key select and update might look like in a Node controller:

const db = require("../models");
const User = db.user;
...

exports.find = (req, res) => {
  const id = req.params.id;
  User.findByPk(id)
    .then(data => {
      if (data) {
        res.send(data);
      } else {
        res.status(404).send({
          message: `ERROR finding user with id=${id}!`
        });
      }
    })
    .catch(err => {
      res.status(500).send({
        message: `ERROR retrieving user data!`
      });
    });
};

exports.update = (req, res) => {
  const id = req.params.id;
  User.update(req.body, {
    where: { id: id }
  })
    .then(([num]) => {  // update() resolves to an array containing the affected row count
      if (num === 1) {
        res.send({
          message: "User was updated successfully!"
        });
      } else {
        res.send({
          message: `ERROR updating user with id=${id}!`
        });
      }
    })
    .catch(err => {
      res.status(500).send({
        message: `ERROR updating user data!`
      });
    });
};
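For completeness, below is a hypothetical sketch of how such a controller might be wired to HTTP routes, assuming an Express-based app and a controller module at app/controllers/user.controller.js (both are assumptions, not part of the original setup):

// Hypothetical Express route wiring for the controller sketched above.
module.exports = app => {
  const users = require("../controllers/user.controller.js");
  const router = require("express").Router();

  router.get("/:id", users.find);    // GET  /api/users/:id
  router.put("/:id", users.update);  // PUT  /api/users/:id

  app.use("/api/users", router);
};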

In the next blog post, we’ll shift our focus towards the popular UI library React and how state changes propagate across the UI components and the React Redux central store.

PostgreSQL Table Partitioning

With the ever-growing demand for data science work in recent years, PostgreSQL has gained superb popularity, especially in areas where extensive geospatial/GIS (geographic information system) functionality is needed. In a previous startup venture, MySQL was initially adopted and I went through the trouble of migrating to PostgreSQL mainly because of the sophisticated geospatial features PostGIS offers.

PostgreSQL offers a lot of goodies, although it does have a few things that I wish were done differently. Most notable to me is that while its SELECT statement supports the standard JOIN syntax, its UPDATE statement does not accept joins the way some other databases (e.g. MySQL) do. For instance, the following UPDATE statement would not work in PostgreSQL:

--
-- MySQL-style JOIN UPDATE (not supported by PostgreSQL)
--
UPDATE X
INNER JOIN Y ON X.yid = Y.id
LEFT JOIN Z ON Y.zid = Z.id
SET
    X.yname = Y.name,
    X.zname = Z.name
WHERE
    X.id IN (101,102,103)
;
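The PostgreSQL way is to express the same update with the UPDATE ... FROM form instead; here is a sketch against the same hypothetical tables:

--
-- PostgreSQL equivalent using UPDATE ... FROM
--
UPDATE X
SET
    yname = Y.name,
    zname = Z.name
FROM Y
LEFT JOIN Z ON Y.zid = Z.id
WHERE
    X.yid = Y.id
    AND X.id IN (101,102,103)
;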

Partial indexing

Nevertheless, for general performance and scalability, PostgreSQL remains one of the top candidates with a proven track record in the world of open source RDBMS. There is a wide variety of approaches to scaling up a PostgreSQL database. Suitable indexing is probably one of the first things to look into. Aside from planning out column orders in indexes that are optimal for the frequently used queries, there is another indexing feature that PostgreSQL provides for handling large datasets.

Partial indexing allows an index to be built over a subset of a table based on a conditional expression. For instance:

--
-- Partial Index
--
CREATE INDEX ix_unshipped_orders ON orders (order_num) WHERE shipped is FALSE;

In the case of a table with a large number of rows, this feature could make an otherwise gigantic index much smaller, thus more efficient for queries against the selectively indexed data.

Scaling up with table partitioning

However, when a table grows to a certain volume, say, beyond a couple of hundred million rows, and if periodically archiving off data from the table isn’t an option, it would still be a problem even with an applicable indexing strategy. In many cases, it might be necessary to do something directly with the table structure, and table partitioning is often a good solution.

There are a few approaches to partitioning a PostgreSQL table. Among them, partitioning by means of table inheritance is perhaps the most popular. A master table is created as a template that defines the table structure. This master table stays empty, whereas a number of child tables inheriting from it actually host the data.

The partitioning is based on a partition key, which can be a column or a combination of columns. In some common use cases, the partition keys are date-time related. For instance, a partition key could be defined in a table to partition all sales orders by month with a constraint like the following:

order_date >= '2016-12-01 00:00:00' AND order_date < '2017-01-01 00:00:00'
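With inheritance-based partitioning, that constraint would translate into a child table declaration; for example (a sketch assuming a hypothetical orders master table):

--
-- Example: child table holding one month of a hypothetical orders table
--
CREATE TABLE orders_2016_12 (
    CHECK ( order_date >= '2016-12-01 00:00:00' AND order_date < '2017-01-01 00:00:00' )
) INHERITS (orders);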

Other common cases include partitioning geographically, etc.

A table partitioning example

When I was with a real estate startup building an application that involved over 100 million nationwide properties, each with multiple attributes of interest, table partitioning was employed to address the demanding data volume. Below is a simplified example of how the property sale transaction table was partitioned to maintain a billion rows of data.

First, create the master table which will serve as the template for the table structure.

--
-- Create master table
--
DROP TABLE IF EXISTS property_sale CASCADE;

CREATE TABLE property_sale (
    "id" bigserial primary key,
    "state" character varying(20) not null,
    "property_id" bigint not null,
    "property_type" character varying(100),
    "sale_date" timestamp with time zone,
    "sale_price" integer
);

Next, create child tables inheriting from the master table for the individual states. For simplicity, I only set up 24 states for performance evaluation.

--
-- Create child tables (for 24 selected states)
--
CREATE TABLE property_sale_ca ( CHECK ( state = 'CA' )) INHERITS (property_sale);
CREATE TABLE property_sale_ny ( CHECK ( state = 'NY' )) INHERITS (property_sale);
CREATE TABLE property_sale_tx ( CHECK ( state = 'TX' )) INHERITS (property_sale);
CREATE TABLE property_sale_il ( CHECK ( state = 'IL' )) INHERITS (property_sale);
CREATE TABLE property_sale_wa ( CHECK ( state = 'WA' )) INHERITS (property_sale);
CREATE TABLE property_sale_fl ( CHECK ( state = 'FL' )) INHERITS (property_sale);
CREATE TABLE property_sale_va ( CHECK ( state = 'VA' )) INHERITS (property_sale);
CREATE TABLE property_sale_co ( CHECK ( state = 'CO' )) INHERITS (property_sale);
CREATE TABLE property_sale_oh ( CHECK ( state = 'OH' )) INHERITS (property_sale);
CREATE TABLE property_sale_nv ( CHECK ( state = 'NV' )) INHERITS (property_sale);
CREATE TABLE property_sale_or ( CHECK ( state = 'OR' )) INHERITS (property_sale);
CREATE TABLE property_sale_pa ( CHECK ( state = 'PA' )) INHERITS (property_sale);
CREATE TABLE property_sale_ut ( CHECK ( state = 'UT' )) INHERITS (property_sale);
CREATE TABLE property_sale_ma ( CHECK ( state = 'MA' )) INHERITS (property_sale);
CREATE TABLE property_sale_ct ( CHECK ( state = 'CT' )) INHERITS (property_sale);
CREATE TABLE property_sale_la ( CHECK ( state = 'LA' )) INHERITS (property_sale);
CREATE TABLE property_sale_wi ( CHECK ( state = 'WI' )) INHERITS (property_sale);
CREATE TABLE property_sale_wy ( CHECK ( state = 'WY' )) INHERITS (property_sale);
CREATE TABLE property_sale_nm ( CHECK ( state = 'NM' )) INHERITS (property_sale);
CREATE TABLE property_sale_nj ( CHECK ( state = 'NJ' )) INHERITS (property_sale);
CREATE TABLE property_sale_nh ( CHECK ( state = 'NH' )) INHERITS (property_sale);
CREATE TABLE property_sale_mi ( CHECK ( state = 'MI' )) INHERITS (property_sale);
CREATE TABLE property_sale_md ( CHECK ( state = 'MD' )) INHERITS (property_sale);
CREATE TABLE property_sale_dc ( CHECK ( state = 'DC' )) INHERITS (property_sale);
CREATE TABLE property_sale_err ( CHECK ( state NOT IN (
    'CA', 'NY', 'TX', 'IL', 'WA', 'FL', 'VA', 'CO', 'OH', 'NV', 'OR', 'PA', 
    'UT', 'MA', 'CT', 'LA', 'WI', 'WY', 'NM', 'NJ', 'NH', 'MI', 'MD', 'DC' ))
) INHERITS (property_sale);

Nothing magical so far, until a suitable trigger for propagating insert is put in place. The trigger essentially redirects insert requests against the master table to the corresponding child tables.

--
-- Create trigger for insert
--
DROP FUNCTION IF EXISTS fn_insert_property_sale();
CREATE OR REPLACE FUNCTION fn_insert_property_sale()
RETURNS TRIGGER AS $$
BEGIN
    CASE NEW.state
        WHEN 'CA' THEN
            INSERT INTO property_sale_ca VALUES (NEW.*);
        WHEN 'NY' THEN
            INSERT INTO property_sale_ny VALUES (NEW.*);
        WHEN 'TX' THEN
            INSERT INTO property_sale_tx VALUES (NEW.*);
        WHEN 'IL' THEN
            INSERT INTO property_sale_il VALUES (NEW.*);
        WHEN 'WA' THEN
            INSERT INTO property_sale_wa VALUES (NEW.*);
        WHEN 'FL' THEN
            INSERT INTO property_sale_fl VALUES (NEW.*);
        WHEN 'VA' THEN
            INSERT INTO property_sale_va VALUES (NEW.*);
        WHEN 'CO' THEN
            INSERT INTO property_sale_co VALUES (NEW.*);
        WHEN 'OH' THEN
            INSERT INTO property_sale_oh VALUES (NEW.*);
        WHEN 'NV' THEN
            INSERT INTO property_sale_nv VALUES (NEW.*);
        WHEN 'OR' THEN
            INSERT INTO property_sale_or VALUES (NEW.*);
        WHEN 'PA' THEN
            INSERT INTO property_sale_pa VALUES (NEW.*);
        WHEN 'UT' THEN
            INSERT INTO property_sale_ut VALUES (NEW.*);
        WHEN 'MA' THEN
            INSERT INTO property_sale_ma VALUES (NEW.*);
        WHEN 'CT' THEN
            INSERT INTO property_sale_ct VALUES (NEW.*);
        WHEN 'LA' THEN
            INSERT INTO property_sale_la VALUES (NEW.*);
        WHEN 'WI' THEN
            INSERT INTO property_sale_wi VALUES (NEW.*);
        WHEN 'WY' THEN
            INSERT INTO property_sale_wy VALUES (NEW.*);
        WHEN 'NM' THEN
            INSERT INTO property_sale_nm VALUES (NEW.*);
        WHEN 'NJ' THEN
            INSERT INTO property_sale_nj VALUES (NEW.*);
        WHEN 'NH' THEN
            INSERT INTO property_sale_nh VALUES (NEW.*);
        WHEN 'MI' THEN
            INSERT INTO property_sale_mi VALUES (NEW.*);
        WHEN 'MD' THEN
            INSERT INTO property_sale_md VALUES (NEW.*);
        WHEN 'DC' THEN
            INSERT INTO property_sale_dc VALUES (NEW.*);
        ELSE
            INSERT INTO property_sale_err VALUES (NEW.*);
    END CASE;
    RETURN NULL;
END;
$$
LANGUAGE plpgsql;

CREATE TRIGGER tr_insert_property_sale
    BEFORE INSERT ON property_sale
    FOR EACH ROW EXECUTE PROCEDURE fn_insert_property_sale();

Let’s test inserting data into the partitioned tables via the trigger:

--
-- Check created tables, functions, triggers
--
\dt property_sale*
\df fn_*
select tgname from pg_trigger where tgname like 'tr_%';

--
-- Test insert
--
INSERT INTO property_sale ("state", "property_id", "property_type", "sale_date", "sale_price") VALUES
('CA', 1008, 'Single Family House', '1990-05-05 09:30:00', 750000),
('NY', 2505, 'Apartment', '2002-01-30 12:00:00', 800000),
('ZZ', 9999, 'Single Family House', '2012-06-28 14:00:00', 500000),
('CA', 1008, 'Condominium', '2015-11-02 16:00:00', 1200000),
('TX', 3030, 'Condominium', '2006-04-20 11:15:00', 500000)
;
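If the trigger works as intended, the master table itself stays empty while each row lands in its matching child table (the 'ZZ' row should end up in property_sale_err). A quick way to verify:

--
-- Verify the routing (ONLY restricts the query to the master table itself)
--
SELECT count(*) FROM ONLY property_sale;
SELECT count(*) FROM property_sale;
SELECT state, property_id, sale_price FROM property_sale_ca;
SELECT state, property_id, sale_price FROM property_sale_err;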

A Python program for data import

Now that the master table and its child tables are functionally in place, we’re going to populate them with large-scale data for testing. First, write a simple program using Python (or any other programming/scripting language) as follows to generate simulated data in a tab-delimited file for data import:

#!/usr/bin/python
import getopt, sys
import csv
import random
import time

def usage():
    print("Usage: %s [-h] [-r rows -f file]" % sys.argv[0])
    print(" e.g.: %s -r 20000 -f /db/data/test/pg_partitioning_infile.txt" % sys.argv[0])

def randDateTime(start, end, rand):
    format = '%Y-%m-%d %H:%M:%S'
    startTime = time.mktime(time.strptime(start, format))
    endTime = time.mktime(time.strptime(end, format))
    randTime = startTime + rand * (endTime - startTime)
    return time.strftime(format, time.localtime(randTime))

if __name__ == "__main__":

    rows = 0
    txtFile = ""

    try:
        opts, args = getopt.getopt(sys.argv[1:], "hr:f:", ["help", "rows=", "file="])
    except getopt.GetoptError as err:
        usage()
        sys.exit(2)

    for okey, oval in opts:
        if okey in ("-h", "--help"):
            usage()
            sys.exit()
        elif okey in ("-r", "--rows"):
            rows = int(oval)
        elif okey in ("-f", "--file"):
            txtFile = oval
        else:
            assert False, "unhandled option"

    print("rows = %d" % rows)

    stateList = ['CA', 'NY', 'TX', 'IL', 'WA', 'FL', 'VA', 'CO', 'OH', 'NV', 'OR', 'PA', \
                 'UT', 'MA', 'CT', 'LA', 'WI', 'WY', 'NM', 'NJ', 'NH', 'MI', 'MD', 'DC']
    propTypeList = ['Single Family', 'Condominium', 'Multi (2-4 units)', 'Duplex', 'Triplex', \
                    'Quadruplex', 'Miscellaneous', 'Mobile Home', 'Residential Vacant Land']

    with open(txtFile, 'w') as f:
        w = csv.writer(f, dialect = 'excel-tab')
        rowCount = 0

        for propId in range(1000, 200000000):
            state = random.choice(stateList)
            propType = random.choice(propTypeList)
            numSales = random.randint(1, 8)
            randPrice = random.randint(250, 2500) * 1000

            saleCount = 0
            while rowCount < rows and saleCount < numSales:
                saleDate = randDateTime("1980-01-01 00:00:00", "2017-01-01 00:00:00", random.random())
                salePrice = randPrice * (1.0 + random.randint(-20, 20) / 100.0)
                salePrice = int(salePrice / 1000.0) * 1000

                w.writerow([state, propId, propType, saleDate, salePrice])
                rowCount += 1
                saleCount += 1

            if rowCount >= rows:
                break

Run the Python program to generate up to 1 billion rows of property sale data. Given the rather huge output, make sure the generated file is on a storage device with plenty of space. Since it’s going to take some time to finish the task, it’s better to run it in the background, perhaps with a mail notification upon completion, like the following:

--
-- Run pg_partitioning_infile.py to create tab-delimited infile
--
cd /db/app/test/
nohup python pg_partitioning_infile.py -r 1000000000 -f /db/data/test/pg_partitioning_infile_1b.txt 2>&1 | mail -s "pg partitioning infile - creating 1-billion-row infile" me@mydomain.com &

Next, load data from the generated infile into the partitioned tables using psql. In case indexes have been created for the partitioned tables, it would generally be much more efficient to first drop them and recreate them after loading the data, as in the following:

--
-- Drop indexes, if exist, to speed up loading
--
psql -d mydb -U dbu -h localhost
DROP INDEX IF EXISTS property_sale_ca_prop_type_id;
DROP INDEX IF EXISTS property_sale_ny_prop_type_id;
DROP INDEX IF EXISTS property_sale_tx_prop_type_id;
DROP INDEX IF EXISTS property_sale_il_prop_type_id;
DROP INDEX IF EXISTS property_sale_wa_prop_type_id;
DROP INDEX IF EXISTS property_sale_fl_prop_type_id;
DROP INDEX IF EXISTS property_sale_va_prop_type_id;
DROP INDEX IF EXISTS property_sale_co_prop_type_id;
DROP INDEX IF EXISTS property_sale_oh_prop_type_id;
DROP INDEX IF EXISTS property_sale_nv_prop_type_id;
DROP INDEX IF EXISTS property_sale_or_prop_type_id;
DROP INDEX IF EXISTS property_sale_pa_prop_type_id;
DROP INDEX IF EXISTS property_sale_ut_prop_type_id;
DROP INDEX IF EXISTS property_sale_ma_prop_type_id;
DROP INDEX IF EXISTS property_sale_ct_prop_type_id;
DROP INDEX IF EXISTS property_sale_la_prop_type_id;
DROP INDEX IF EXISTS property_sale_wi_prop_type_id;
DROP INDEX IF EXISTS property_sale_wy_prop_type_id;
DROP INDEX IF EXISTS property_sale_nm_prop_type_id;
DROP INDEX IF EXISTS property_sale_nj_prop_type_id;
DROP INDEX IF EXISTS property_sale_nh_prop_type_id;
DROP INDEX IF EXISTS property_sale_mi_prop_type_id;
DROP INDEX IF EXISTS property_sale_md_prop_type_id;
DROP INDEX IF EXISTS property_sale_dc_prop_type_id;
DROP INDEX IF EXISTS property_sale_err_prop_type_id;
\q

--
-- Load data from the infile into property_sale
--
nohup psql -d mydb -U dbu -h localhost -c "\copy property_sale(state, property_id, property_type, sale_date, sale_price) from '/db/data/test/pg_partitioning_infile_1b.txt' delimiter E'\t'" 2>&1 | mail -s "pg partitioning test - loading 1 billion rows" me@mydomain.com &

psql -d mydb -U dbu -h localhost -c "select count(*) from property_sale;"
psql -d mydb -U dbu -h localhost -c "select count(*) from property_sale where state = 'NY';"

--
-- Recreate indexes (run in background)
--
nohup psql -d mydb -U dbu -h localhost -c "CREATE INDEX property_sale_ca_prop_type_id ON property_sale_ca (property_type, id);" &
nohup psql -d mydb -U dbu -h localhost -c "CREATE INDEX property_sale_ny_prop_type_id ON property_sale_ny (property_type, id);" &
nohup psql -d mydb -U dbu -h localhost -c "CREATE INDEX property_sale_tx_prop_type_id ON property_sale_tx (property_type, id);" &
nohup psql -d mydb -U dbu -h localhost -c "CREATE INDEX property_sale_il_prop_type_id ON property_sale_il (property_type, id);" &
nohup psql -d mydb -U dbu -h localhost -c "CREATE INDEX property_sale_wa_prop_type_id ON property_sale_wa (property_type, id);" &
nohup psql -d mydb -U dbu -h localhost -c "CREATE INDEX property_sale_fl_prop_type_id ON property_sale_fl (property_type, id);" &
nohup psql -d mydb -U dbu -h localhost -c "CREATE INDEX property_sale_va_prop_type_id ON property_sale_va (property_type, id);" &
nohup psql -d mydb -U dbu -h localhost -c "CREATE INDEX property_sale_co_prop_type_id ON property_sale_co (property_type, id);" &
nohup psql -d mydb -U dbu -h localhost -c "CREATE INDEX property_sale_oh_prop_type_id ON property_sale_oh (property_type, id);" &
nohup psql -d mydb -U dbu -h localhost -c "CREATE INDEX property_sale_nv_prop_type_id ON property_sale_nv (property_type, id);" &
nohup psql -d mydb -U dbu -h localhost -c "CREATE INDEX property_sale_or_prop_type_id ON property_sale_or (property_type, id);" &
nohup psql -d mydb -U dbu -h localhost -c "CREATE INDEX property_sale_pa_prop_type_id ON property_sale_pa (property_type, id);" &
nohup psql -d mydb -U dbu -h localhost -c "CREATE INDEX property_sale_ut_prop_type_id ON property_sale_ut (property_type, id);" &
nohup psql -d mydb -U dbu -h localhost -c "CREATE INDEX property_sale_ma_prop_type_id ON property_sale_ma (property_type, id);" &
nohup psql -d mydb -U dbu -h localhost -c "CREATE INDEX property_sale_ct_prop_type_id ON property_sale_ct (property_type, id);" &
nohup psql -d mydb -U dbu -h localhost -c "CREATE INDEX property_sale_la_prop_type_id ON property_sale_la (property_type, id);" &
nohup psql -d mydb -U dbu -h localhost -c "CREATE INDEX property_sale_wi_prop_type_id ON property_sale_wi (property_type, id);" &
nohup psql -d mydb -U dbu -h localhost -c "CREATE INDEX property_sale_wy_prop_type_id ON property_sale_wy (property_type, id);" &
nohup psql -d mydb -U dbu -h localhost -c "CREATE INDEX property_sale_nm_prop_type_id ON property_sale_nm (property_type, id);" &
nohup psql -d mydb -U dbu -h localhost -c "CREATE INDEX property_sale_nj_prop_type_id ON property_sale_nj (property_type, id);" &
nohup psql -d mydb -U dbu -h localhost -c "CREATE INDEX property_sale_nh_prop_type_id ON property_sale_nh (property_type, id);" &
nohup psql -d mydb -U dbu -h localhost -c "CREATE INDEX property_sale_mi_prop_type_id ON property_sale_mi (property_type, id);" &
nohup psql -d mydb -U dbu -h localhost -c "CREATE INDEX property_sale_md_prop_type_id ON property_sale_md (property_type, id);" &
nohup psql -d mydb -U dbu -h localhost -c "CREATE INDEX property_sale_dc_prop_type_id ON property_sale_dc (property_type, id);" &
nohup psql -d mydb -U dbu -h localhost -c "CREATE INDEX property_sale_err_prop_type_id ON property_sale_err (property_type, id);" &

Query with Constraint Exclusion

Prior to querying the tables, make sure the query optimization parameter constraint_exclusion is enabled. Note that SET applies only to the current session, so for an ad-hoc query issued via psql -c, the setting needs to be included in the same command as the query.

--
-- Turn on query optimization for partitioned tables
--
SET constraint_exclusion = on;

--
-- Query test
--
psql -d mydb -U dbu -h localhost -c "select count(*) from property_sale where state = 'NY' and property_type = 'Condominium';"

With constraint exclusion enabled, the query planner will be smart enough to examine the query’s constraints and skip scanning the child tables whose check constraints don’t match. Unfortunately, though, if the constraints involve matching against non-constants like the NOW() function, the query planner won’t have enough information to filter out unwanted partitions, hence won’t be able to take advantage of the optimization.
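One way to confirm that unwanted partitions are indeed being excluded is to inspect the query plan; for the query above, something along these lines (only property_sale_ny, plus the empty master table, should appear in the plan):

--
-- Inspect the plan to confirm partition pruning
--
EXPLAIN
SELECT count(*) FROM property_sale
WHERE state = 'NY' AND property_type = 'Condominium';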

Final notes

With a suitable partitioning scheme applied to a big table, query performance can be improved by an order of magnitude. As illustrated in the above case, the entire partitioning scheme centers around the key column used for partitioning, hence it’s critical to properly plan out which key column (or combination of columns) to partition by. The number of partitions should also be carefully thought out, as too few partitions might not help much whereas too many partitions would create excessive overhead.

Relational Database Redemption

Relational databases, such as PostgreSQL and Oracle, can be traced back to the 1980s, when they became the dominant type of data management system. Their prominence was further secured by the ANSI standardization of the domain-specific language called SQL (Structured Query Language). Since then, the RDBMS (relational database management system) has been the de facto component that most data-centric applications are architecturally centered around.

What happened to relational databases?

It’s a little troubling, though, that over the past 10-15 years I’ve witnessed relational databases being sidelined from core functionality requirement reviews and architectural design in many software engineering projects involving data-centric applications. In particular, other kinds of databases would often be favored for no good reason. And when relational databases are part of the core technology stack, thorough data model design would often be skipped and the use of SQL avoided even in cases where it would be highly efficient.

So, why have relational databases been treated with noticeably less preference or seriousness? I believe a couple of causes have led to this phenomenon.

Object-oriented data persistence architecture

First, there was a shift in application architecture in the late ’90s, when object-oriented programming began to increasingly dominate the computing world. In particular, the backend data persistence component of object-oriented applications began to take over the heavy lifting of database CRUD (create/read/update/delete) operations, which used to reside within the database tier as SQL or PL/SQL procedural code.

Java EJB (Enterprise JavaBeans), which was aimed at providing data persistence and query functionality among other things, took the object-oriented programming world by storm. ORM (object-relational mapping) then further helped keep software engineers completely inside the Object world. After the initial EJB specifications were recognized as over-engineered, the technology later evolved into JPA (Java Persistence API), which also incorporates ORM functionality. All that doesn’t eliminate the need for relational databases, but engineering design focus has since been pulled away from the database tier, and SQL has been treated as if it were irrelevant.

NoSQL databases

Then, in the late 2000s, came column-oriented NoSQL databases like HBase and Cassandra, designed primarily to handle large-scale datasets. Built to run on scalable distributed computing platforms, these databases are great for handling Big Data at a scale where conventional relational databases would have a hard time performing well.

Meanwhile, document-based NoSQL databases like MongoDB also emerged and have increasingly been adopted as part of the core technology stack by software engineers. These NoSQL databases all of a sudden stole the spotlight in the database world. Relational databases were further perceptually “demoted”, and SQL wouldn’t look right without a negation prefix.

Object-oriented data persistence versus SQL, PL/SQL

Just to be clear, I’m not against having the data persistence layer of the application handle the business logic of data manipulations and queries within the Object world. In fact, I think it makes perfect sense to keep data access business logic within the application tier using the same object-oriented programming paradigm, shielding software engineers from having to directly deal with things in the disparate SQL world.

Another huge benefit of object-oriented data persistence is that it can take advantage of whatever scaling mechanisms the application servers provide (especially those on distributed computing platforms), rather than, say, relying entirely on database-resident PL/SQL procedures that don’t scale well.

What I’m against, though, is skipping proper design and usage best practices when a relational database is used, as if the ORM would just magically handle all the data manipulations and queries over a blob of poorly structured data. In addition, while ORMs can automatically generate SQL for a relatively simple data model, they aren’t good at coming up with optimal, efficient SQL for many sophisticated models in the real world.

NoSQL databases versus Relational databases

Another point I should clarify: I love both SQL-based relational and NoSQL databases, and have adopted them as core parts of different systems in the past. I believe each has its own sweet spots as well as drawbacks, and should be adopted in accordance with the specific need in data persistence and consumption.

I’ve seen some engineering organizations flocking to the NoSQL world for valid reasons, and others just to look cool. I’ve also seen, on a couple of occasions, companies roll back from a NoSQL platform to relational databases to better address their transactional needs, after realizing that their growing data volume could actually be handled fine by a properly designed relational database system.

In general, if your database need leans towards data warehousing and the projected data volume is huge, NoSQL is probably a great choice; otherwise, sticking with relational databases might be the better deal. It all boils down to the specific business requirements, and these days it’s also common for both database types to be adopted simultaneously to complement each other. As to what’s considered huge, I would say a NoSQL database solution is warranted when one or more tables need to house hundreds of millions of rows of data or more.

Why do relational databases still matter?

The answer to whether relational databases still matter is a decisive yes:

  1. Real-world need of relational data models — A good portion of structured and inter-related data in the real world is still best represented by relational data models. While column-oriented databases excel in handling very large datasets, they aren’t designed for modeling relational data entities.
  2. Transactional CRUD operations — Partly due to NoSQL databases’ fundamental design, data often needs to be stored in denormalized form for performance, which makes transactional operations difficult. By contrast, the relational model is much better suited for the transactional CRUD operations that many types of applications require. That, coupled with the standard SQL language for transactional CRUD, makes the role of relational databases hard to replace.
  3. Bulk data manipulations — Besides being a proven, versatile tool for transactional CRUD, SQL also excels at manipulating data in bulk without compromising atomicity. While PL/SQL isn’t suitable for every kind of data manipulation task, when used with caution it provides procedural functionality for bulk data processing and complex ETL (extract-transform-load).
  4. Improved server hardware — Improvements in server processing power and the low cost of memory and storage in recent years have helped relational databases cope with the increasing demand for high data volume. On top of that, prominent database systems are equipped with robust data sharding and clustering features that also decidedly help scalability. Relational databases with tens or even hundreds of millions of rows in a table aren’t uncommon these days.

Missing skills from today’s software architects

In recent years, I’ve encountered quite a few senior software engineers/architects with advanced programming skills but poor relational data modeling and SQL knowledge. With their computing background, I believe many of these engineers could pick up the essential knowledge without too much effort. (That being said, while commanding the relational database fundamentals is rather trivial, becoming a database guru does require some decent effort.) It’s primarily the lack of drive to sharpen their skills in this specific domain that has led to the said phenomenon.

The task of database design still largely falls on the shoulders of the software architect. Most database administrators can configure database systems and fine-tune queries at the operational level to ensure the databases run optimally, but few possess the business requirement knowledge or, in many cases, the skills for database design. Suitable database design and data modeling require an intimate understanding of the business logic of the entire application, which normally lies in the software architect’s arena.

Even in the NoSQL world of column-oriented databases, I’ve noticed that database design skills are also largely missing. Part of NoSQL databases’ signature is that data columns don’t have to be well-defined upfront and can be added later as needed. Because of that, many software architects tend to think they have the liberty to bypass proper schema design upfront. The truth is that NoSQL databases need proper schema design as well. For instance, in HBase, due to its by-design limitation on indexing, one needs to carefully lay out upfront what the row key is composed of and what column families will be maintained.

Old and monolithic?

Aside from causes related to the disruptive technologies described above, some misconceptions associating relational databases with obsolete technology or monolithic design have also contributed to the unwarranted negative attitude towards RDBMS.

Old != Obsolete — Relational database technology is old. Fundamentally, it hasn’t changed in decades, whereas new computing and data persistence technology buzzwords keep popping up left and right non-stop. With so many emerging technologies to learn all at once, the old RDBMS often gets placed at the bottom of the queue. In any case, if a technology is old but continues to excel within its domain, it isn’t obsolete.

RDBMS != Monolith — Contemporary software architects have been advocating against monolithic design. In recent years, more and more applications have been designed and built as microservices with isolated autonomous services and data locality. That’s all great stuff in the ever-evolving software engineering landscape, but when people automatically categorize an application with a high-volume relational database as a monolithic system, that’s a flawed assumption.

Bottom line: as long as much of the data in the real world is still best represented by relational data models, RDBMS will have its place in the computing world.