PostgreSQL Table Partitioning

With the ever-growing demand for data science work in recent years, PostgreSQL has gained tremendous popularity, especially in areas where extensive geospatial/GIS (geographic information system) functionality is needed. In a previous startup venture, we initially adopted MySQL, and I went through the trouble of migrating to PostgreSQL mainly because of the sophisticated geospatial features PostGIS offers.

PostgreSQL offers a lot of goodies, although it does have a few things I wish were done differently. Most notable to me is that while its SELECT statement supports the SQL-92 standard’s JOIN syntax, its UPDATE statement does not accept JOIN clauses (a convenient extension MySQL offers). For instance, the following UPDATE statement would not work in PostgreSQL:

--
-- MySQL-style UPDATE with JOINs (rejected by PostgreSQL)
--
UPDATE X
INNER JOIN Y ON X.yid = Y.id
LEFT JOIN Z ON Y.zid = Z.id
SET
    X.yname = Y.name,
    X.zname = Z.name
WHERE
    X.id IN (101,102,103)
;
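
PostgreSQL’s workaround is the UPDATE ... FROM form, where the first table in the FROM list joins to the update target through the WHERE clause. Below is a sketch of the equivalent statement against the same hypothetical tables:

--
-- PostgreSQL equivalent using UPDATE ... FROM
--
UPDATE X
SET
    yname = Y.name,
    zname = Z.name
FROM Y
LEFT JOIN Z ON Y.zid = Z.id
WHERE
    X.yid = Y.id
    AND X.id IN (101,102,103)
;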

Partial indexing

Nevertheless, for general performance and scalability, PostgreSQL remains one of the top candidates, with a proven track record in the world of open-source RDBMS. There is a wide variety of approaches to scaling up a PostgreSQL database, and suitable indexing is probably one of the first things to look into. Aside from planning out proper column orders in indexes for the frequently used queries, there is another indexing feature that PostgreSQL provides for handling large datasets.

Partial indexing allows an index to be built over a subset of a table based on a conditional expression. For instance:

--
-- Partial Index
--
CREATE INDEX ix_unshipped_orders ON orders (order_num) WHERE shipped IS FALSE;

For a table with a large number of rows, this feature can make an otherwise gigantic index much smaller, and thus more efficient for queries against the selectively indexed data.
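
For example, a lookup that stays within the indexed subset can then be served by the much smaller index. A quick sketch against the hypothetical orders table above:

--
-- A query the partial index can serve
--
SELECT order_num FROM orders WHERE shipped IS FALSE;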

Scaling up with table partitioning

However, when a table grows beyond a certain volume, say, a couple of hundred million rows, and periodically archiving off data from the table isn’t an option, performance can still suffer even with an applicable indexing strategy. In many cases, it becomes necessary to do something directly about the table structure, and table partitioning is often a good solution.

There are a few approaches to partitioning a PostgreSQL table. Among them, partitioning by means of table inheritance is perhaps the most popular. A master table is created as a template that defines the table structure; it remains empty, while a number of child tables inheriting from it actually host the data.

The partitioning is based on a partition key, which can be a column or a combination of columns. In many common use cases, partition keys are date-time related. For instance, a partition key could partition all sales orders by month with a constraint like the following:

order_date >= '2016-12-01 00:00:00' AND order_date < '2017-01-01 00:00:00'
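
In the inheritance-based approach described above, such a key range is typically expressed as a CHECK constraint on a child table. A minimal sketch, assuming a hypothetical sales_order master table:

--
-- Hypothetical monthly partition as a child table
--
CREATE TABLE sales_order_2016_12 (
    CHECK ( order_date >= '2016-12-01 00:00:00' AND order_date < '2017-01-01 00:00:00' )
) INHERITS (sales_order);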

Other common cases include partitioning geographically, as in the example below.

A table partitioning example

When I was with a real estate startup building an application that involved over 100 million nationwide properties, each with multiple attributes of interest, table partitioning was employed to address the demanding data volume. Below is a simplified example of how the property sale transaction table was partitioned to maintain a billion rows of data.

First, create the master table which will serve as the template for the table structure.

--
-- Create master table
--
DROP TABLE IF EXISTS property_sale CASCADE;

CREATE TABLE property_sale (
    "id" bigserial primary key,
    "state" character varying(20) not null,
    "property_id" bigint not null,
    "property_type" character varying(100),
    "sale_date" timestamp with time zone,
    "sale_price" integer
);

Next, create child tables inheriting from the master table for the individual states. For simplicity, I only set up 24 states for performance evaluation.

--
-- Create child tables (for 24 selected states)
--
CREATE TABLE property_sale_ca ( CHECK ( state = 'CA' )) INHERITS (property_sale);
CREATE TABLE property_sale_ny ( CHECK ( state = 'NY' )) INHERITS (property_sale);
CREATE TABLE property_sale_tx ( CHECK ( state = 'TX' )) INHERITS (property_sale);
CREATE TABLE property_sale_il ( CHECK ( state = 'IL' )) INHERITS (property_sale);
CREATE TABLE property_sale_wa ( CHECK ( state = 'WA' )) INHERITS (property_sale);
CREATE TABLE property_sale_fl ( CHECK ( state = 'FL' )) INHERITS (property_sale);
CREATE TABLE property_sale_va ( CHECK ( state = 'VA' )) INHERITS (property_sale);
CREATE TABLE property_sale_co ( CHECK ( state = 'CO' )) INHERITS (property_sale);
CREATE TABLE property_sale_oh ( CHECK ( state = 'OH' )) INHERITS (property_sale);
CREATE TABLE property_sale_nv ( CHECK ( state = 'NV' )) INHERITS (property_sale);
CREATE TABLE property_sale_or ( CHECK ( state = 'OR' )) INHERITS (property_sale);
CREATE TABLE property_sale_pa ( CHECK ( state = 'PA' )) INHERITS (property_sale);
CREATE TABLE property_sale_ut ( CHECK ( state = 'UT' )) INHERITS (property_sale);
CREATE TABLE property_sale_ma ( CHECK ( state = 'MA' )) INHERITS (property_sale);
CREATE TABLE property_sale_ct ( CHECK ( state = 'CT' )) INHERITS (property_sale);
CREATE TABLE property_sale_la ( CHECK ( state = 'LA' )) INHERITS (property_sale);
CREATE TABLE property_sale_wi ( CHECK ( state = 'WI' )) INHERITS (property_sale);
CREATE TABLE property_sale_wy ( CHECK ( state = 'WY' )) INHERITS (property_sale);
CREATE TABLE property_sale_nm ( CHECK ( state = 'NM' )) INHERITS (property_sale);
CREATE TABLE property_sale_nj ( CHECK ( state = 'NJ' )) INHERITS (property_sale);
CREATE TABLE property_sale_nh ( CHECK ( state = 'NH' )) INHERITS (property_sale);
CREATE TABLE property_sale_mi ( CHECK ( state = 'MI' )) INHERITS (property_sale);
CREATE TABLE property_sale_md ( CHECK ( state = 'MD' )) INHERITS (property_sale);
CREATE TABLE property_sale_dc ( CHECK ( state = 'DC' )) INHERITS (property_sale);
CREATE TABLE property_sale_err ( CHECK ( state NOT IN (
    'CA', 'NY', 'TX', 'IL', 'WA', 'FL', 'VA', 'CO', 'OH', 'NV', 'OR', 'PA', 
    'UT', 'MA', 'CT', 'LA', 'WI', 'WY', 'NM', 'NJ', 'NH', 'MI', 'MD', 'DC' ))
) INHERITS (property_sale);

Nothing magical so far, until a suitable trigger for propagating inserts is put in place. The trigger essentially redirects insert requests against the master table to the corresponding child tables.

--
-- Create trigger for insert
--
DROP FUNCTION IF EXISTS fn_insert_property_sale();
CREATE OR REPLACE FUNCTION fn_insert_property_sale()
RETURNS TRIGGER AS $$
BEGIN
    CASE NEW.state
        WHEN 'CA' THEN
            INSERT INTO property_sale_ca VALUES (NEW.*);
        WHEN 'NY' THEN
            INSERT INTO property_sale_ny VALUES (NEW.*);
        WHEN 'TX' THEN
            INSERT INTO property_sale_tx VALUES (NEW.*);
        WHEN 'IL' THEN
            INSERT INTO property_sale_il VALUES (NEW.*);
        WHEN 'WA' THEN
            INSERT INTO property_sale_wa VALUES (NEW.*);
        WHEN 'FL' THEN
            INSERT INTO property_sale_fl VALUES (NEW.*);
        WHEN 'VA' THEN
            INSERT INTO property_sale_va VALUES (NEW.*);
        WHEN 'CO' THEN
            INSERT INTO property_sale_co VALUES (NEW.*);
        WHEN 'OH' THEN
            INSERT INTO property_sale_oh VALUES (NEW.*);
        WHEN 'NV' THEN
            INSERT INTO property_sale_nv VALUES (NEW.*);
        WHEN 'OR' THEN
            INSERT INTO property_sale_or VALUES (NEW.*);
        WHEN 'PA' THEN
            INSERT INTO property_sale_pa VALUES (NEW.*);
        WHEN 'UT' THEN
            INSERT INTO property_sale_ut VALUES (NEW.*);
        WHEN 'MA' THEN
            INSERT INTO property_sale_ma VALUES (NEW.*);
        WHEN 'CT' THEN
            INSERT INTO property_sale_ct VALUES (NEW.*);
        WHEN 'LA' THEN
            INSERT INTO property_sale_la VALUES (NEW.*);
        WHEN 'WI' THEN
            INSERT INTO property_sale_wi VALUES (NEW.*);
        WHEN 'WY' THEN
            INSERT INTO property_sale_wy VALUES (NEW.*);
        WHEN 'NM' THEN
            INSERT INTO property_sale_nm VALUES (NEW.*);
        WHEN 'NJ' THEN
            INSERT INTO property_sale_nj VALUES (NEW.*);
        WHEN 'NH' THEN
            INSERT INTO property_sale_nh VALUES (NEW.*);
        WHEN 'MI' THEN
            INSERT INTO property_sale_mi VALUES (NEW.*);
        WHEN 'MD' THEN
            INSERT INTO property_sale_md VALUES (NEW.*);
        WHEN 'DC' THEN
            INSERT INTO property_sale_dc VALUES (NEW.*);
        ELSE
            INSERT INTO property_sale_err VALUES (NEW.*);
    END CASE;
    RETURN NULL;
END
$$
LANGUAGE plpgsql;

CREATE TRIGGER tr_insert_property_sale
    BEFORE INSERT ON property_sale
    FOR EACH ROW EXECUTE PROCEDURE fn_insert_property_sale();

Let’s test inserting data into the partitioned tables via the trigger:

--
-- Check created tables, functions, triggers
--
\dt property_sale*
\df fn_*
select tgname from pg_trigger where tgname like 'tr_%';

--
-- Test insert
--
INSERT INTO property_sale ("state", "property_id", "property_type", "sale_date", "sale_price") VALUES
('CA', 1008, 'Single Family House', '1990-05-05 09:30:00', 750000),
('NY', 2505, 'Apartment', '2002-01-30 12:00:00', 800000),
('ZZ', 9999, 'Single Family House', '2012-06-28 14:00:00', 500000),
('CA', 1008, 'Condominium', '2015-11-02 16:00:00', 1200000),
('TX', 3030, 'Condominium', '2006-04-20 11:15:00', 500000)
;
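
If the routing trigger works as intended, the master table itself stays empty while each row lands in its matching child table, with the unrecognized 'ZZ' row going to the catch-all. A quick way to verify:

--
-- Verify row routing (ONLY restricts the scan to the master table)
--
SELECT count(*) FROM ONLY property_sale;           -- expect 0
SELECT count(*) FROM property_sale;                -- expect 5, master plus children
SELECT state, property_id FROM property_sale_err;  -- expect the 'ZZ' row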

A Python program for data import

Now that the master table and its child tables are functionally in place, we’re going to populate them with large-scale data for testing. First, write a simple program in Python (or any other programming/scripting language) to generate simulated data in a tab-delimited file for import:

#!/usr/bin/python
import getopt, sys
import csv
import random
import time

def usage():
    print("Usage: %s [-h] [-r rows -f file]" % sys.argv[0])
    print(" e.g.: %s -r 20000 -f /db/data/test/pg_partitioning_infile.txt" % sys.argv[0])

def randDateTime(start, end, rand):
    format = '%Y-%m-%d %H:%M:%S'
    startTime = time.mktime(time.strptime(start, format))
    endTime = time.mktime(time.strptime(end, format))
    randTime = startTime + rand * (endTime - startTime)
    return time.strftime(format, time.localtime(randTime))

if __name__ == "__main__":

    rows = 0
    txtFile = ""

    try:
        opts, args = getopt.getopt(sys.argv[1:], "hr:f:", ["help", "rows=", "file="])
    except getopt.GetoptError as err:
        usage()
        sys.exit(2)

    for okey, oval in opts:
        if okey in ("-h", "--help"):
            usage()
            sys.exit()
        elif okey in ("-r", "--rows"):
            rows = int(oval)
        elif okey in ("-f", "--file"):
            txtFile = oval
        else:
            assert False, "unhandled option"

    print("rows = %d" % rows)

    stateList = ['CA', 'NY', 'TX', 'IL', 'WA', 'FL', 'VA', 'CO', 'OH', 'NV', 'OR', 'PA', \
                 'UT', 'MA', 'CT', 'LA', 'WI', 'WY', 'NM', 'NJ', 'NH', 'MI', 'MD', 'DC']
    propTypeList = ['Single Family', 'Condominium', 'Multi (2-4 units)', 'Duplex', 'Triplex', \
                    'Quadruplex', 'Miscellaneous', 'Mobile Home', 'Residential Vacant Land']

    with open(txtFile, 'w') as f:
        w = csv.writer(f, dialect = 'excel-tab')
        rowCount = 0

        for propId in range(1000, 200000000):
            state = random.choice(stateList)
            propType = random.choice(propTypeList)
            numSales = random.randint(1, 8)
            randPrice = random.randint(250, 2500) * 1000

            saleCount = 0
            while rowCount < rows and saleCount < numSales:
                saleDate = randDateTime("1980-01-01 00:00:00", "2017-01-01 00:00:00", random.random())
                salePrice = randPrice * (1.0 + random.randint(-20, 20) / 100.0)
                salePrice = int(salePrice / 1000.0) * 1000

                w.writerow([state, propId, propType, saleDate, salePrice])
                rowCount += 1
                saleCount += 1

            if rowCount >= rows:
                break

Run the Python program to generate up to 1 billion rows of property sale data. Given the rather huge output, make sure the generated file resides on a storage device with plenty of space. Since the task takes a while to finish, it’s best run in the background, perhaps with mail notification upon completion, like the following:

#
# Run pg_partitioning_infile.py to create tab-delimited infile
#
cd /db/app/test/
nohup python pg_partitioning_infile.py -r 1000000000 -f /db/data/test/pg_partitioning_infile_1b.txt 2>&1 | mail -s "pg partitioning infile - creating 1-billion-row infile" me@mydomain.com &

Next, load data from the generated infile into the partitioned tables using psql. If indexes have been created on the partitioned tables, it is generally much more efficient to drop them first and recreate them after the data is loaded, as in the following:

--
-- Drop indexes, if they exist, to speed up loading
--
psql -d mydb -U dbu -h localhost
DROP INDEX IF EXISTS property_sale_ca_prop_type_id;
DROP INDEX IF EXISTS property_sale_ny_prop_type_id;
DROP INDEX IF EXISTS property_sale_tx_prop_type_id;
DROP INDEX IF EXISTS property_sale_il_prop_type_id;
DROP INDEX IF EXISTS property_sale_wa_prop_type_id;
DROP INDEX IF EXISTS property_sale_fl_prop_type_id;
DROP INDEX IF EXISTS property_sale_va_prop_type_id;
DROP INDEX IF EXISTS property_sale_co_prop_type_id;
DROP INDEX IF EXISTS property_sale_oh_prop_type_id;
DROP INDEX IF EXISTS property_sale_nv_prop_type_id;
DROP INDEX IF EXISTS property_sale_or_prop_type_id;
DROP INDEX IF EXISTS property_sale_pa_prop_type_id;
DROP INDEX IF EXISTS property_sale_ut_prop_type_id;
DROP INDEX IF EXISTS property_sale_ma_prop_type_id;
DROP INDEX IF EXISTS property_sale_ct_prop_type_id;
DROP INDEX IF EXISTS property_sale_la_prop_type_id;
DROP INDEX IF EXISTS property_sale_wi_prop_type_id;
DROP INDEX IF EXISTS property_sale_wy_prop_type_id;
DROP INDEX IF EXISTS property_sale_nm_prop_type_id;
DROP INDEX IF EXISTS property_sale_nj_prop_type_id;
DROP INDEX IF EXISTS property_sale_nh_prop_type_id;
DROP INDEX IF EXISTS property_sale_mi_prop_type_id;
DROP INDEX IF EXISTS property_sale_md_prop_type_id;
DROP INDEX IF EXISTS property_sale_dc_prop_type_id;
DROP INDEX IF EXISTS property_sale_err_prop_type_id;
\q

#
# Load data from the infile into property_sale
#
nohup psql -d mydb -U dbu -h localhost -c "\copy property_sale(state, property_id, property_type, sale_date, sale_price) from '/db/data/test/pg_partitioning_infile_1b.txt' delimiter E'\t'" 2>&1 | mail -s "pg partitioning test - loading 1 billion rows" me@mydomain.com &

psql -d mydb -U dbu -h localhost -c "select count(*) from property_sale;"
psql -d mydb -U dbu -h localhost -c "select count(*) from property_sale where state = 'NY';"

#
# Recreate indexes (run in background)
#
nohup psql -d mydb -U dbu -h localhost -c "CREATE INDEX property_sale_ca_prop_type_id ON property_sale_ca (property_type, id);" &
nohup psql -d mydb -U dbu -h localhost -c "CREATE INDEX property_sale_ny_prop_type_id ON property_sale_ny (property_type, id);" &
nohup psql -d mydb -U dbu -h localhost -c "CREATE INDEX property_sale_tx_prop_type_id ON property_sale_tx (property_type, id);" &
nohup psql -d mydb -U dbu -h localhost -c "CREATE INDEX property_sale_il_prop_type_id ON property_sale_il (property_type, id);" &
nohup psql -d mydb -U dbu -h localhost -c "CREATE INDEX property_sale_wa_prop_type_id ON property_sale_wa (property_type, id);" &
nohup psql -d mydb -U dbu -h localhost -c "CREATE INDEX property_sale_fl_prop_type_id ON property_sale_fl (property_type, id);" &
nohup psql -d mydb -U dbu -h localhost -c "CREATE INDEX property_sale_va_prop_type_id ON property_sale_va (property_type, id);" &
nohup psql -d mydb -U dbu -h localhost -c "CREATE INDEX property_sale_co_prop_type_id ON property_sale_co (property_type, id);" &
nohup psql -d mydb -U dbu -h localhost -c "CREATE INDEX property_sale_oh_prop_type_id ON property_sale_oh (property_type, id);" &
nohup psql -d mydb -U dbu -h localhost -c "CREATE INDEX property_sale_nv_prop_type_id ON property_sale_nv (property_type, id);" &
nohup psql -d mydb -U dbu -h localhost -c "CREATE INDEX property_sale_or_prop_type_id ON property_sale_or (property_type, id);" &
nohup psql -d mydb -U dbu -h localhost -c "CREATE INDEX property_sale_pa_prop_type_id ON property_sale_pa (property_type, id);" &
nohup psql -d mydb -U dbu -h localhost -c "CREATE INDEX property_sale_ut_prop_type_id ON property_sale_ut (property_type, id);" &
nohup psql -d mydb -U dbu -h localhost -c "CREATE INDEX property_sale_ma_prop_type_id ON property_sale_ma (property_type, id);" &
nohup psql -d mydb -U dbu -h localhost -c "CREATE INDEX property_sale_ct_prop_type_id ON property_sale_ct (property_type, id);" &
nohup psql -d mydb -U dbu -h localhost -c "CREATE INDEX property_sale_la_prop_type_id ON property_sale_la (property_type, id);" &
nohup psql -d mydb -U dbu -h localhost -c "CREATE INDEX property_sale_wi_prop_type_id ON property_sale_wi (property_type, id);" &
nohup psql -d mydb -U dbu -h localhost -c "CREATE INDEX property_sale_wy_prop_type_id ON property_sale_wy (property_type, id);" &
nohup psql -d mydb -U dbu -h localhost -c "CREATE INDEX property_sale_nm_prop_type_id ON property_sale_nm (property_type, id);" &
nohup psql -d mydb -U dbu -h localhost -c "CREATE INDEX property_sale_nj_prop_type_id ON property_sale_nj (property_type, id);" &
nohup psql -d mydb -U dbu -h localhost -c "CREATE INDEX property_sale_nh_prop_type_id ON property_sale_nh (property_type, id);" &
nohup psql -d mydb -U dbu -h localhost -c "CREATE INDEX property_sale_mi_prop_type_id ON property_sale_mi (property_type, id);" &
nohup psql -d mydb -U dbu -h localhost -c "CREATE INDEX property_sale_md_prop_type_id ON property_sale_md (property_type, id);" &
nohup psql -d mydb -U dbu -h localhost -c "CREATE INDEX property_sale_dc_prop_type_id ON property_sale_dc (property_type, id);" &
nohup psql -d mydb -U dbu -h localhost -c "CREATE INDEX property_sale_err_prop_type_id ON property_sale_err (property_type, id);" &

Query with Constraint Exclusion

Prior to querying the tables, make sure the query optimization parameter, constraint_exclusion, is enabled.

--
-- Turn on query optimization for partitioned tables
--
SET constraint_exclusion = on;

--
-- Query test (same psql session, so the SET above is in effect)
--
SELECT count(*) FROM property_sale WHERE state = 'NY' AND property_type = 'Condominium';

With constraint exclusion enabled, the query planner is smart enough to examine query constraints and exclude scans of those partitioned tables that don’t match. Unfortunately, if the constraints involve matching against non-constants like the NOW() function, the query planner won’t have enough information to filter out unwanted partitions, and hence won’t be able to take advantage of the optimization.
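
To confirm the exclusion is actually happening, EXPLAIN the query and check which child tables show up in the plan; only the empty master table and the matching partition should be scanned:

--
-- Confirm partition exclusion in the query plan
--
EXPLAIN SELECT count(*) FROM property_sale
WHERE state = 'NY' AND property_type = 'Condominium';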

Final notes

With a suitable partitioning scheme applied to a big table, query performance can be improved by an order of magnitude. As illustrated in the case above, the entire partitioning scheme centers around the partition key, so it’s critical to properly plan out which column (or combination of columns) to partition on. The number of partitions should also be carefully thought out, as too few partitions might not help, whereas too many would create excessive overhead.

Relational Database Redemption

Relational databases, such as PostgreSQL and Oracle, can be traced back to the ’80s, when they became a dominant type of data management system. Their prominence was further secured by the ANSI standardization of the domain-specific language called SQL (Structured Query Language). Since then, the RDBMS (relational database management system) has been the de facto component around which most data-centric applications are architecturally centered.

What happened to relational Databases?

It’s a little troubling that, over the past 10-15 years, I’ve witnessed relational databases being sidelined from core functionality requirement reviews and architectural design in many software engineering projects involving data-centric applications. In particular, other kinds of databases would often be favored for no good reason. And when relational databases were part of the core technology stack, thorough data model design would often be skipped, and use of SQL avoided even in cases where it would be highly efficient.

So, why have relational databases been treated with such noticeably less preference or seriousness? I believe a couple of causes have led to the phenomenon.

Object-oriented data persistence architecture

First, there was a shift in application architecture in the late ’90s, when object-oriented programming began to increasingly dominate the computing world. In particular, the backend data persistence component of object-oriented applications began to take over the heavy lifting of the database CRUD (create/read/update/delete) operations, which used to reside within the database tier via SQL or the procedural language PL/SQL.

Java EJB (Enterprise JavaBeans), which aimed to provide data persistence and query functionality among other things, took the object-oriented programming world by storm. ORM (object-relational mapping) then further helped keep software engineers completely inside the object world. After the initial EJB specifications proved over-engineered, the technology evolved into JPA (Java Persistence API), which also incorporates ORM functionality. None of that eliminates the need for relational databases, but engineering design focus has since been pulled away from the database tier, and SQL has been treated as if it were irrelevant.

NoSQL databases

Then, in the late ’00s, came column-oriented NoSQL databases like HBase and Cassandra, designed primarily to handle large-scale datasets. Built to run on scalable distributed computing platforms, these databases are great for handling Big Data at a scale where conventional relational databases would have a hard time performing well.

Meanwhile, document-based NoSQL databases like MongoDB also emerged and have increasingly been adopted as part of the core technology stack by software engineers. These NoSQL databases all of a sudden stole the spotlight in the database world. Relational databases were further perceptually “demoted”, and SQL wouldn’t look right without a negation prefix.

Object-oriented data persistence versus SQL, PL/SQL

Just to be clear, I’m not against having the data persistence layer of the application handle the business logic of data manipulation and queries within the object world. In fact, I think it makes perfect sense to keep data access business logic within the application tier using the same object-oriented programming paradigm, shielding software engineers from having to deal directly with the disparate SQL world.

Another huge benefit of object-oriented data persistence is that it takes advantage of whatever scaling mechanism the application servers provide (especially on distributed computing platforms), rather than, say, relying entirely on database-resident PL/SQL procedures that don’t scale well.

What I’m against, though, is skipping proper design and usage best practices when a relational database is used, under the illusion that the ORM will just magically handle all the data manipulation and queries of a blob of poorly structured data. In addition, while ORMs can automatically generate SQL for a relatively simple data model, they aren’t good at coming up with optimal, efficient SQL for many sophisticated real-world models.

NoSQL databases versus Relational databases

Another point I should clarify: I love both SQL-based relational and NoSQL databases, and have adopted both as core parts of different systems in the past. I believe they have their own sweet spots as well as drawbacks, and should be adopted in accordance with the specific data persistence and consumption needs.

I’ve seen some engineering organizations flock to the NoSQL world for valid reasons, and others just to look cool. I’ve also seen, on a couple of occasions, companies roll back from a NoSQL platform to relational databases to better address their transactional needs, after realizing that their growing data volume could actually be handled fine by a properly designed relational database system.

In general, if your database needs lean toward data warehousing and the projected data volume is huge, NoSQL is probably a great choice; otherwise, sticking with relational databases might be the best deal. It all boils down to the specific business requirement, and these days it’s also common for both database types to be adopted simultaneously to complement each other. As to what’s considered huge, I would say it warrants a NoSQL database solution when one or more tables need to house hundreds of millions or more rows of data.

Why do relational databases still matter?

The answer to whether relational databases still matter is a decisive yes:

  1. Real-world need for relational data models — A good portion of structured and inter-related data in the real world is still best represented by relational data models. While column-oriented databases excel at handling very large datasets, they aren’t designed for modeling relational data entities.
  2. Transactional CRUD operations — Partly due to NoSQL databases’ fundamental design, data often needs to be stored in denormalized form for performance, and that makes transactional operations difficult. A relational database, by contrast, is a much more suitable model for the transactional CRUD operations that many types of applications require. That, coupled with the standard SQL language for transactional CRUD, makes the role of relational databases hard to replace.
  3. Bulk data manipulations — Besides being a proven, versatile tool for transactional CRUD, SQL also excels at manipulating data in bulk without compromising atomicity. While PL/SQL isn’t suitable for every kind of data manipulation task, used with caution it provides procedural functionality for bulk data processing or complex ETL (extract-transform-load).
  4. Improved server hardware — Improvements in server processing power and the low cost of memory and storage in recent years have helped relational databases cope with the increasing demand for high data volume. On top of that, prominent database systems are equipped with robust data sharding and clustering features that also decidedly help scalability. Relational databases with tens or even hundreds of millions of rows in a table aren’t uncommon these days.

Missing skills from today’s software architects

In recent years, I’ve encountered quite a few senior software engineers/architects with advanced programming skills but poor relational data modeling/SQL knowledge. With their computing background, I believe many of these engineers could pick up the essential knowledge without too much effort. (That being said, I should add that while commanding the relational database fundamentals is rather trivial, becoming a database guru does require some decent effort.) It’s primarily the lack of drive to sharpen their skills in this specific domain that has led to the said phenomenon.

The task of database design still largely falls on the shoulders of the software architect. Most database administrators can configure database systems and fine-tune queries at the operational level to ensure the databases run optimally, but few possess the business requirement knowledge or, in many cases, the skills for database design. Suitable database design and data modeling require intimate knowledge and understanding of the business logic of the entire application, which is normally the software architect’s arena.

Even in the NoSQL world of column-oriented databases, I’ve noticed that database design skills are largely missing. Part of a NoSQL database’s signature is that data columns don’t have to be well-defined upfront and can be added later as needed. Because of that, many software architects tend to think they have the liberty to bypass proper upfront schema design. The truth is that NoSQL databases need proper schema design as well. For instance, in HBase, due to the by-design limitation of indexing, one needs to carefully lay out upfront what the row key comprises and what column families will be maintained.

Old and monolithic?

Aside from the causes related to the disruptive technologies described above, some misconceptions that associate relational databases with obsolete technology or monolithic design have also contributed to the unwarranted negative attitude towards RDBMS.

Old != Obsolete — Relational database technology is old. Fundamentally it hasn’t changed in decades, whereas new computing and data persistence technology buzzwords keep popping up left and right, nonstop. With so many emerging technologies one wants to learn all at once, the old RDBMS often gets placed at the bottom of the queue. In any case, if a technology is old but continues to excel within its domain, it isn’t obsolete.

RDBMS != Monolith — Contemporary software architects have been advocating against monolithic design. In recent years, more and more applications have been designed and built as microservices with isolated autonomous services and data locality. That’s all great stuff in the ever-evolving software engineering landscape, but when people automatically categorize any application with a high-volume relational database as a monolithic system, that’s a flawed assumption.

Bottom line, as long as much of the data in the real world is still best represented in relational data models, RDBMS will have its place in the computing world.

Startup Culture 2.0

Startup has been a household term since the early-to-mid ’90s, when the World Wide Web (now a nostalgic term) began to take the world by storm. Triggered by the popular graphical browser Mosaic, the blossoming of the Web all of a sudden opened up all sorts of business opportunities, attracting entrepreneurs to try to capitalize on the newly born, eye-catching medium.

A historically well-known place for technology entrepreneurship, Silicon Valley (or more precisely, the San Francisco Bay Area) became an even hotter spot for entrepreneurs to swarm into. Many of these entrepreneurs were young, energetic college graduates (or drop-outs) in science/engineering disciplines who were well equipped to quickly learn and apply new things in computing. They generally had a fast-paced work style with a can-do spirit. Along with the youthful work-hard play-hard attitude, the so-called startup culture was born. Sun Microsystems was probably a great representative of companies embracing that very culture back in the dot-com era.

So, that was a brief history, admittedly unofficial, of the uprising of startup culture 1.0.

[Photos: setting up an open-space engineering workspace]

Version 2.0

This isn’t another ex-dot-commer glorifying the good old startup culture of the dot-com days that later degenerated into a less commendable version observed today. Rather, it’s just my observation, over recent years, of the gradual, subtle changes to the so-called startup culture.

Heading into startup culture 2.0 (an obviously arbitrary version number), besides the emphasis on fast pace and can-do spirit, along came a number of phenomena, including long hours, open space and the Agile “movement”.

Long hours

In 1.0, we saw a lot of startup technologists voluntarily working long hours in the office. Glorified stories of techies literally camping in the office live on. Company management nowadays still believe (or want to believe) that software engineers working long hours is a signature of startup culture.

In reality, working long hours is no longer a “voluntary” phenomenon, if it ever was. Why is that? I believe one reason is that a good portion of the techies in startups today are veterans who prefer work-life balance, with a work schedule that focuses on productive hours rather than the number of hours. Another reason is that working remotely is now far more feasible thanks to improved Internet connectivity at home, which diminishes the need to stay in the office for network access.

The fact is that serious software engineering requires serious brain-work. One can only deliver a certain number of hours of quality work on any given day. Forcing engineers to pull long hours in the office beyond the normal productive limit might just result in higher code quantity but lower code quality. Worse yet, genuine software engineering enthusiasts tend to contribute bonus work in ad-hoc high-creativity moments outside office hours, and forced long hours will likely kill whatever stamina or incentive is left to do so.

Flexibility in work hours and locale

Back in 1.0, general Internet connection speeds were slow. It was common for a good-sized company with nationwide offices to share a T1 line as their Internet backbone, whereas today many residential consumers have connections an order of magnitude faster than a T1. So, to carry out productive work back then, people had to go to the office, which is why you could often find software engineers literally camping there.

Given the vastly improved residential Internet infrastructure today, much of the engineering work that used to be doable only in the office can be done at home. So, if a software engineer already has a regular office presence, there is little reason to work long hours in the office. In fact, other than pre-scheduled group meetings and white-boarding sessions, engineers really don’t have to stay in the office all the time, especially those who have to endure a bad commute.

Open office space

The open-plan office has been advocated since the middle of the 1.0 era. Office cubes used to be 5-1/2 feet or taller in the old days. People later began to open up the visually cluttered office space by cutting about a foot off the cube walls, allowing individuals to talk face to face when standing while keeping some privacy when sitting. Personally, I think that’s the optimal setting. But in recent years, lots of startups have adopted cubes with walls further lowered or removed entirely, essentially enforcing a surround-sound “multicast” communication protocol on individuals at all times. The vast open view also ensures constant visual distraction.

Bear in mind that software engineers need good chunks of solitary time to do their coding work. Such a multicast, visually distracting environment isn’t going to be productive for them. It’s understandable for a cash-strapped early-stage startup to adopt a temporary, economical seating arrangement like that, but I’ve seen many well-funded companies build out such a workspace as their long-term work environment.

Agile

Virtually every software engineering organization is practicing some sort of Agile process. I like certain core Agile practices, such as the 2-week development sprint, the daily 15-minute scrum and continuous integration, which I think are applicable to many software development projects. But I’m against mechanically adopting everything advocated in the Agile methodology.

Which aspects of the Agile process to adopt should be evaluated, carried out and adjusted in accordance with the engineers’ skill sets, the project type and the operating environment. The primary goal should be efficiency and productivity, not adopting Agile because it sounds cool.

Insecurity and distrust?

With typically limited budgets and tight timelines in mind, management tend to be nervous about whether their engineering hires are delivering the best work they can. Long hours in the office create the impression that things are moving along at above-average speed. Open space further eases the anxiety through seeing-is-believing. And frequent sprints and daily scrums ensure maximum work output and microscopic measurement of individuals’ performance.

If those are the motives the software engineers perceive, most likely they won’t be happy, and the most competent ones will be the first to flee. Nor will management be happy when they don’t see the expected high productivity and find it hard to retain top engineers. The end result is that despite all the rallying around a fun, energetic startup culture on the company’s hiring web page, people hardly feel fun or energy there.

What can be done better?

Management:

  1. Give your staff the benefit of the doubt — It’s hard to let go of doubt about whether people are working as hard as expected, but pushing for long hours in the office and keeping people visually exposed at their desks only sends a signal of distrust and insecurity. By pushing for long hours in the office, management are in essence commodifying software engineering into some hourly-paid kind of mechanical work. It will only backfire, and may just result in superficial punch-clock office attendance with low productivity. I would also recommend making work hours as flexible as possible. On remote work, a pre-agreed telecommuting arrangement goes a long way for those who must endure a long commute. People with enough self-respect will appreciate demonstrated trust from management; it makes them enjoy their jobs more, and thus produce better work.
  2. Work healthy — Work hard, play hard is almost a synonym for startup culture, to the point where fun is considered a critical aspect of a startup environment. But throwing in a ping pong or foosball table does not automatically spawn a fun environment. In building out a work environment, I would argue that “work healthy” should perhaps replace “fun” as the primary initiative. A healthy working environment leads to happier staff and better productivity, and fun comes as a bonus. Common ways to achieve that include a suitable office plan, ergonomic office furniture, natural lighting, exercise facilities, workout subsidy programs, healthy snacks, or even a room for taking naps. Speaking of naps, it’s worth seriously considering embracing them as part of the culture: a 15-30 minute nap after lunch can do magic in refreshing one’s mood and productivity for the remaining half of the day.
  3. Adopt Agile with agility — Take only what best suits your staff’s skill sets, project type and operating environment. Stick to the primary goal of better productivity and efficiency, and add or remove Agile practices as needed. It’s also important to regularly communicate with the staff for feedback and improvement, and make adaptive changes to the practices when necessary.
  4. Product development feedback — Despite all the development methodologies with fancy names, there is often a disconnect between actual engineering progress and the product development plan. A common tactic is to assemble an unfulfillably aggressive development plan to try to push for maximum engineering output. Unfortunately, such practice often results in disregard for development timelines or inferior product quality with ballooning overdue tech debt. A better approach is to maintain an effective feedback loop that constantly provides data (including actual progress estimates, tech debt clearance needs, etc.) between product and engineering staff. The feedback allows product development staff to proactively plan future feature sets that engineers can more realistically commit to delivering.
  5. Lead by example — Too often we see management handing down a set of rules to the staff while condescendingly assuming the rules don’t apply to themselves. It’s almost certain such rules will at best be followed superficially. Another common phenomenon is management rallying to create a culture that conflicts in many ways with their actual beliefs and style. They do it because they were told they must create a culture to run a startup operation, ignoring the fact that culture can only be built and fostered with themselves genuinely being a part of it. It cannot be fabricated.

Individuals:

  1. Honor the honor system — There may be various reasons for the commonly seen distrust from management. Unsurprisingly, one of them comes directly from individuals who abuse employee benefits meant to be used at one’s discretion. Perhaps the most common case is claiming the need to work from home with made-up reasons, or without actually putting in the hours. You can’t blame people for distrusting you unless you first honor the honor system. For instance, when you do work from home, stick to the actual meaning of work-from-home. Making your availability known to those who need to work closely with you also helps. One effective way, especially for a relatively small team, is a shared group calendar designated for showing up-to-date team members’ availability.
  2. Self-discipline — Again, using work-from-home as an example, one needs the self-discipline to actually put in a decent number of hours of work. It’s easy to be distracted, for instance by family members, when working at home, but it’s your own responsibility to make the arrangements necessary to minimize expected distractions. It’s also your obligation to let your teammates know in advance when you will be unavailable for scheduled appointments or what-not.
  3. Reasonable work priority — For unplanned urgent family matters, no one will complain if you drop everything to take care of them. However, that doesn’t justify frequently compromising your attendance at work for all sorts of personal events, unless that’s a pre-agreed work schedule. Bottom line, if your career matters to you, you shouldn’t place your job responsibilities way down your priority list below routine personal and family matters.
  4. Active participation — Most software engineers hate meetings, feeling that they consume too much of the time that could otherwise be used for actually building products. I think if the host and the participants are well prepared, a meeting will successfully serve its purpose (e.g. information sharing, brainstorming, team building, etc.) with minimal negative feelings. An unprepared participant attending a meeting with a feed-me mindset will likely find the meeting a waste of time. With some preparation, chances are you will be a lot more engaged in the discussion and able to provide informed input. Such active participation stimulates collective creativity and fosters a culture of “best ideas win”.
  5. Keep upgrading yourself — This may sound off-topic, but keeping yourself abreast of knowledge and best practices in the very area of your core job responsibilities does help shape the team culture. Constant self-improvement naturally boosts one’s confidence in one’s own domain which, in turn, facilitates efficient knowledge exchange and stimulates healthy competition. All that helps promote a high-efficiency, no-nonsense culture. The competitive aspect presents a healthy challenge to individuals, as long as excessive egos don’t get in the way. As a side note on “upgrading”, between breadth and depth I would always lean toward depth. These days it’s too easy to claim surface familiarity with all sorts of robust frameworks and libraries, but the most wanted technologists are often the ones who demonstrate in-depth knowledge, say, down to the code level of a software library.

Final thoughts

Startup culture 1.0 left us a signature work style many aspire to embrace. It has evolved over the years into a more contemporary 2.0 that better suits modern software development models in various fast-changing, competitive spaces. But it’s important that we don’t superficially take the surface values of all the hyped-up buzzwords and mechanically apply them. The various cultural aspects should be selectively adopted and adjusted in accordance with the team’s strengths and weaknesses, project type, etc. More importantly, whatever is embraced should never be driven by distrust or insecurity.