
PostgreSQL Table Partitioning

With the ever-growing demand for data science work in recent years, PostgreSQL has gained tremendous popularity, especially in areas where extensive geospatial/GIS (geographic information system) functionality is needed. In a previous startup venture, MySQL was initially adopted, and I went through the trouble of migrating to PostgreSQL mainly because of the sophisticated geospatial features PostGIS offers.

PostgreSQL offers a lot of goodies, although it does have a few things that I wish were done differently. Most notable to me is that while its SELECT statement supports the SQL-92 Standard's JOIN syntax, its UPDATE statement does not. For instance, an UPDATE statement like the following would not work in PostgreSQL:
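Something along these lines, with made-up orders and customers tables; the MySQL-style JOIN form is rejected, and PostgreSQL expects its own UPDATE ... FROM form instead:

-- MySQL accepts this JOIN form in UPDATE, but PostgreSQL rejects it:
UPDATE orders o
JOIN customers c ON c.id = o.customer_id
SET o.status = 'vip'
WHERE c.tier = 'gold';

-- PostgreSQL's equivalent uses UPDATE ... FROM:
UPDATE orders o
SET status = 'vip'
FROM customers c
WHERE c.id = o.customer_id
  AND c.tier = 'gold';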

Partial indexing

Nevertheless, for general performance and scalability, PostgreSQL remains one of the top candidates with a proven track record in the world of open source RDBMS. There is a wide variety of approaches to scaling up a PostgreSQL database. Suitable indexing is probably one of the first things to look into. Aside from planning out proper column orders in indexes that are optimal for the frequently used queries, there is another indexing feature that PostgreSQL provides for handling large datasets.

Partial indexing allows an index to be built over a subset of a table based on a conditional expression. For instance:
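A sketch, with a made-up listings table, where only the rows the hot queries actually touch get indexed:

-- Index only active listings rather than the entire table.
CREATE INDEX idx_listings_active_price
  ON listings (price)
  WHERE status = 'active';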

In the case of a table with a large number of rows, this feature could make an otherwise gigantic index much smaller, thus more efficient for queries against the selectively indexed data.

Scaling up with table partitioning

However, when a table grows to a certain volume, say, beyond a couple of hundred million rows, and periodically archiving off data from the table isn't an option, queries can still be problematic even with an applicable indexing strategy. In many cases, it might be necessary to do something directly with the table structure, and table partitioning is often a good solution.

There are a few approaches to partition a PostgreSQL table. Among them, partitioning by means of table inheritance is perhaps the most popular approach. A master table will be created as a template that defines the table structure. This master table will be empty whereas a number of child tables inherited from this master table will actually host the data.

The partitioning is based on a partition key, which can be a column or a combination of columns. In some common use cases, the partition keys are often date-time related. For instance, a partition key could be defined in a table to partition all sales orders by month, with constraints like the following:

order_date >= '2016-12-01 00:00:00' AND order_date < '2017-01-01 00:00:00'

Other common cases include partitioning geographically, e.g. by state or region.

A table partitioning example

When I was with a real estate startup building an application that involved over 100 million nationwide properties, each with multiple attributes of interest, table partitioning was employed to address the demanding data volume. Below is a simplified example of how the property sale transaction table was partitioned to maintain a billion rows of data.

First, create the master table which will serve as the template for the table structure.
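A much-simplified sketch of such a master table (the column names here are illustrative, not the original schema):

-- Master table: defines the structure only; it will hold no data itself.
CREATE TABLE property_sale (
    id          bigserial,
    state       char(2)       NOT NULL,   -- partition key: US state code
    property_id bigint        NOT NULL,
    sale_date   date          NOT NULL,
    sale_price  numeric(12,2)
);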

Next, create child tables inheriting from the master table for the individual states. For simplicity, I only set up 24 states for performance evaluation.
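Something along these lines, with each child table carrying a CHECK constraint on the partition key (only two of the states shown):

-- One child table per state, each enforcing its own slice of the key space.
CREATE TABLE property_sale_ca (
    CHECK (state = 'CA')
) INHERITS (property_sale);

CREATE TABLE property_sale_ny (
    CHECK (state = 'NY')
) INHERITS (property_sale);

-- ... and so on for the remaining states ...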

Nothing magical so far, until a suitable trigger for propagating inserts is put in place. The trigger essentially redirects insert requests against the master table to the corresponding child tables.
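A sketch of such a trigger for the tables above; the real thing would enumerate all 24 states or build the target table name dynamically:

CREATE OR REPLACE FUNCTION property_sale_insert_trigger()
RETURNS TRIGGER AS $$
BEGIN
    -- Route each row to the child table matching its state.
    IF NEW.state = 'CA' THEN
        INSERT INTO property_sale_ca VALUES (NEW.*);
    ELSIF NEW.state = 'NY' THEN
        INSERT INTO property_sale_ny VALUES (NEW.*);
    -- ... one branch per remaining state ...
    ELSE
        RAISE EXCEPTION 'No partition for state %', NEW.state;
    END IF;
    RETURN NULL;  -- keep the row out of the master table itself
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER insert_property_sale
BEFORE INSERT ON property_sale
FOR EACH ROW EXECUTE PROCEDURE property_sale_insert_trigger();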

Let’s test inserting data into the partitioned tables via the trigger:
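For example, with made-up values:

-- Rows inserted against the master table should land in the right child tables.
INSERT INTO property_sale (state, property_id, sale_date, sale_price)
VALUES ('CA', 1001, '2016-11-15', 750000.00),
       ('NY', 2002, '2016-12-03', 550000.00);

-- The master table itself stays empty ...
SELECT count(*) FROM ONLY property_sale;

-- ... while each child table holds its own rows.
SELECT count(*) FROM property_sale_ca;
SELECT count(*) FROM property_sale_ny;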

A Python program for data import

Now that the master table and its child tables are functionally in place, we’re going to populate them with large-scale data for testing. First, write a simple program using Python (or any other programming/scripting language) as follows to generate simulated data in a tab-delimited file for data import:
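A minimal sketch of such a generator, matching the illustrative columns used above (state list trimmed; row count taken from the command line):

#!/usr/bin/env python
# gen_property_sales.py -- emit tab-delimited rows of simulated sale data.
import random
import sys
from datetime import date, timedelta

num_rows = int(sys.argv[1]) if len(sys.argv) > 1 else 1000000
states = ['CA', 'NY', 'TX', 'FL', 'WA', 'IL']   # trimmed; the real run used 24 states
start = date(2000, 1, 1)

with open('property_sales.tsv', 'w') as out:
    for _ in range(num_rows):
        state = random.choice(states)
        property_id = random.randint(1, 100000000)
        sale_date = start + timedelta(days=random.randint(0, 6000))
        sale_price = random.randint(50000, 2000000)
        out.write('%s\t%d\t%s\t%d\n' % (state, property_id, sale_date, sale_price))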

Run the Python program to generate up to 1 billion rows of property sale data. Given the rather huge output, make sure the generated file is on a storage device with plenty of space. Since it's going to take some time to finish, the task is better run in the background, perhaps with a mail notification upon completion, like the following:
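Something like the following, with made-up file names and mail address:

# Run the generator in the background; send a mail notification when it is done.
nohup sh -c 'python gen_property_sales.py 1000000000 && echo done | mail -s "property sale data ready" me@example.com' > gen.log 2>&1 &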

Next, load data from the generated file into the partitioned tables using psql. In case there are indexes created for the partitioned tables, it would generally be much more efficient to drop them first and recreate them after the data is loaded, as in the following:
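A sketch, assuming the schema above, a hypothetical sale_date index on each child table, and a database named mydb:

# Drop indexes first, load, then rebuild -- generally much faster for bulk loads.
psql mydb -c "DROP INDEX IF EXISTS idx_property_sale_ca_sale_date;"
# ... drop the corresponding index on each of the other child tables ...

# COPY through the master table; the insert trigger routes rows to the child tables.
psql mydb -c "\copy property_sale (state, property_id, sale_date, sale_price) from 'property_sales.tsv'"

# Recreate the indexes after the load.
psql mydb -c "CREATE INDEX idx_property_sale_ca_sale_date ON property_sale_ca (sale_date);"
# ... recreate on each of the other child tables ...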

Query with Constraint Exclusion

Prior to querying the tables, make sure the query optimization parameter, constraint_exclusion, is enabled.
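For example:

-- 'partition' (the default) applies constraint exclusion to inheritance child tables.
SHOW constraint_exclusion;
SET constraint_exclusion = partition;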

With constraint exclusion enabled, the query planner will be smart enough to examine query constraints to exclude scanning of those partitioned tables that don’t match the constraints. Unfortunately, though, if the constraints involve matching against non-constants like the NOW() function, the query planner won’t have enough information to filter out unwanted partitions hence won’t be able to take advantage of the optimization.
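This can be checked with EXPLAIN against the illustrative state-partitioned tables above; with a constant predicate only the matching child table shows up in the plan, whereas a non-constant expression (a subquery, NOW() in a date-based scheme, etc.) leaves all child tables in play:

-- Only property_sale_ca (and the empty master) should appear in this plan.
EXPLAIN SELECT count(*) FROM property_sale WHERE state = 'CA';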

Final notes

With a suitable partitioning scheme applied to a big table, query performance can be improved by an order of magnitude. As illustrated in the above case, the entire partitioning scheme centers around the key column used for partitioning, hence it's critical to properly plan out which key column (or combination of columns) to partition on. The number of partitions should also be carefully thought out, as too few partitions might not help, whereas too many partitions would create too much overhead.