
PostgreSQL Join Type Theory

How does the PostgreSQL parser choose which JOIN types to use? Join us for a session on JOIN theory.

In a previous article, we mentioned that PostgreSQL came about because some mathematicians wanted to map set theory to a file system. There is a lot to explain about how that ended up working, and that explanation is still relevant today to understanding how the PostgreSQL parser picks a join method, or join type.

This article will go over the high-level strategy of the PostgreSQL parser for choosing join types.  You can look forward to some subsequent articles that explain in more detail where and when different types of joins are helpful and provide the most efficiency.

If you don't know what set theory is or how basic joins map to set theory, there are some really good articles already out there. We won't bother to reproduce work that has been well done in the past. This article will explain in a bit more practical detail how the query planner deals with the complexity of SQL join theory.

How a Query Planner Works 

The basic workings of a database query planner are not as complex as you might think. The objective is to take apart an SQL statement and turn it into a list of functions that will produce the requested result.

Let's take a look at a fairly simple statement that gives us most of the elements that the parser uses:

SELECT fields, cols, tuples, list, item
FROM relation
INNER JOIN _data
ON (relation.dataid) = (_data.id)
WHERE criteria = 1 AND more_criteria = 4

In this statement, we have a section that is called the "selection list":

SELECT fields, cols, tuples, list, item

This is the list of items to retrieve from the physical data storage.

More importantly to our join types discussion, we have the "scan list":

FROM relation
INNER JOIN _data

This tells the parser which storage the selection list comes from.

Then we have the "qualifications list," which is:

ON (relation.dataid) = (_data.id)
WHERE criteria = 1 AND more_criteria = 4

The qualifications list narrows the search to specific data the caller is interested in.

Each entity in the "scan list" is called a "scan node." When we look at this from an execution point of view, there are several ways in which the parser can retrieve the data the user requested (the selection list) from the scan node(s):

1. Retrieve all of the data and apply the criteria in memory.

2. Scan the tables while applying the criteria directly to each row.

3. Use an index to retrieve only the requested data.

4. Scan the table while using the index to eliminate rows.

5. Index-only scan (requires the selection list to exist in the index).

6. Build a hash of the criteria, then scan the data and apply the hash in memory.

7. Ways I forgot to mention...

8. More ways being thought up by developers all the time...
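To make a few of those options more concrete, here is a minimal sketch of how you can watch the planner switch between a sequential scan and an index-based scan. The table and index are hypothetical (loosely based on the example statement above), and enable_seqscan is a standard, session-local planner setting.

-- Hypothetical table, loosely based on the example statement above.
CREATE TABLE relation (
    id            bigint PRIMARY KEY,
    dataid        bigint,
    criteria      integer,
    more_criteria integer
);
CREATE INDEX relation_criteria_idx ON relation (criteria);

-- The planner chooses a scan type based on cost; on a small or
-- unanalyzed table this is often a sequential scan.
EXPLAIN SELECT dataid FROM relation WHERE criteria = 1;

-- Discourage sequential scans for this session only, and the same
-- query will typically show an index or bitmap scan instead.
SET enable_seqscan = off;
EXPLAIN SELECT dataid FROM relation WHERE criteria = 1;
RESET enable_seqscan;

An index-only scan (option 5) shows up when everything in the selection list is available from the index itself.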

And now, we get to the magic related to our article title. This is where the parser has to decide on the most efficient way to relate the data together based on the qualifications.

ON (relation.dataid) = (_data.id)

It does this by picking a join type, an algorithm written in C that performs the checking required to see if the data is indeed related in the way the user requested.

Generally, we think of these criteria as being equality joins. That is the most common way data is related in a basic relational model—but it is by far not the only way to structure a relationship. The data may fall within a range, a list, a distribution, a non-equality, or just let everything match everything else (Cartesian).
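A few hedged examples of what those other relationships can look like in SQL. The events and windows tables (and their columns) are invented purely for illustration; relation and _data come from the example statement above.

-- Range join: match rows whose timestamp falls inside a window
-- (events and windows are hypothetical tables).
SELECT e.id, w.label
FROM events e
JOIN windows w
  ON e.created_at >= w.starts_at
 AND e.created_at <  w.ends_at;

-- Non-equality join: pair each _data row with every row that has a larger id.
SELECT a.id AS smaller, b.id AS larger
FROM _data a
JOIN _data b ON b.id > a.id;

-- Cartesian product: every row matched with every row.
SELECT r.dataid, d.id
FROM relation r
CROSS JOIN _data d;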

For the sake of simplicity and brevity, we will concern ourselves here with joins based on equality. In further articles, we'll tackle some of the more complex types of joins.

Join Types

There are four basic join types in PostgreSQL. That is, four basic algorithms to match data from one set to another.

1. Nested Loop

2. Hash

3. Subselect

4. Merge

These functions are listed in pg_catalog.pg_proc. This is the system table where the implementations of all of the functions of PostgreSQL reside. The query parser is largely a strategy-picking mechanism that maps your query to the functions in this table.
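If you want to poke around that catalog yourself, a query along these lines works. It is purely illustrative, and exactly which internal functions show up depends on your PostgreSQL version.

-- Browse the function catalog; the rows returned vary by version.
SELECT proname, pg_get_function_arguments(oid) AS args
FROM pg_catalog.pg_proc
WHERE proname LIKE '%hash%' OR proname LIKE '%merge%'
ORDER BY proname
LIMIT 20;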

Each of these computational strategies for joining tables has merits and drawbacks. We have often been asked which method is "best."  If there were a clear answer to that question, the "worst" ones would be removed from the list of options. Why would the PostgreSQL Global Development Group (PGDG) bother to maintain "bad" options?

A better question would be, "Which is best for this case?" We will attempt to make some recommendations in future articles about where each type of join is appropriate. For now, let's just look at how each join is called by the parser.

  • Nested Loop: named for the most basic implementation, two nested for loops (typically for i, for j). At its simplest, it is an exhaustive comparison of every row in one input against every row in the other.

  • Hash: build an in-memory lookup (hash) table from one input, then probe it with rows from the other.

  • Subselect: look up values in a secondary table to match against rows from the primary table.

  • Merge: zipper two pre-sorted inputs together on their common keys.
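The simplest way to see which of these the planner picked for a given query is EXPLAIN; the plan will contain a "Nested Loop", "Hash Join", or "Merge Join" node. A minimal sketch, reusing the hypothetical relation and _data tables from the example statement:

EXPLAIN
SELECT r.dataid, d.id
FROM relation r
INNER JOIN _data d ON r.dataid = d.id
WHERE r.criteria = 1;

On tiny tables you will very often see a Nested Loop; as the row counts grow, the same query frequently flips to a Hash Join or a Merge Join.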

The Sherlock Parser

The PostgreSQL query planner will produce an exhaustive list of all combinations of scan types and join types that *could* fulfill the query request. This can be a very extensive list. It will then eliminate the most expensive items from the list until only one path to accomplish the query remains.  

We call this a "least cost optimizer." The idea is that whatever is left over, however unlikely, is the truth.

Let's take a look at just how complex this join operation is. For starters, there is no guarantee which table in the scan list comes first in the algorithm. The table that comes first in the evaluation is called the "driving" table.  

You could have:

digraph "G" {
    node [shape=rectangle label="SCAN\nNode"]
    a [label="JOIN\nNode"]
    b [label="SCAN\nNode\n'relation'"]
    c [color=blue style=filled label="SCAN\nNode\n'data'"]
    a -> b;
    a -> c;
}


or

digraph "G" {
    node [shape=rectangle label="SCAN\nNode"]
    a [label="JOIN\nNode"]
    b [color=blue style=filled label="SCAN\nNode\n'relation'"]
    c [label="SCAN\nNode\n'data'"]
    a -> b;
    a -> c;
}


This doesn't seem so significant until you realize that this "driving table" concept extends to any number of relations. So, the join order complexity rises at a rate of N!: with five relations there are already 5! = 120 possible orderings to consider, before scan types and join types even enter the picture.

In practice, implementing the join strategy (which is at the heart of database query planning) is one of the most expensive things the query parser has to deal with. You will see the effect of this later as we begin to look at EXPLAIN plans of the individual join types.

The more tables that are involved, the higher the complexity. This is why PostgreSQL has a threshold (geqo_threshold, 12 items in the FROM list by default) on how many relations can be involved in a join before the planner goes into "Genetic Query Optimizer" (GEQO) mode.

In this mode, the planner no longer searches the space of join orders exhaustively; it uses a genetic algorithm to evaluate only a sample of the possible plans. This dramatically reduces planning work, but it also discards some very good plans without ever really evaluating them. If you have ever created a very complex query that suddenly performs miserably when adding one more table, chances are you have invoked the wrath of the GEQO. The relevant settings, shown here with their default values:

geqo=on
geqo_threshold=12
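If you suspect a complex query has crossed into GEQO territory, you can check and (for the current session only) adjust these settings. This is a sketch of the standard SHOW/SET commands, not a tuning recommendation.

SHOW geqo;
SHOW geqo_threshold;

-- Raise the threshold for this session only, e.g., to test whether
-- exhaustive planning finds a better plan for a many-table join.
SET geqo_threshold = 20;
RESET geqo_threshold;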

The default join in PostgreSQL is the INNER JOIN. An OUTER JOIN must be specified if you want all elements from a set, matched or not. A semi-join can be accomplished with an EXISTS clause, and an anti-join with NOT EXISTS, NOT IN, or EXCEPT.
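As a quick illustration, here is roughly what a semi-join and an anti-join look like against the hypothetical relation and _data tables from earlier:

-- Semi-join: relation rows that have at least one match in _data.
SELECT r.*
FROM relation r
WHERE EXISTS (SELECT 1 FROM _data d WHERE d.id = r.dataid);

-- Anti-join: relation rows with no match in _data.
SELECT r.*
FROM relation r
WHERE NOT EXISTS (SELECT 1 FROM _data d WHERE d.id = r.dataid);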

The parser also performs a few other transformations before creating the list of possible plans. This is done by the "query rewriter," a component that attempts to rewrite the SQL into equivalent statements.

Some of the things that could be affected by the query rewriter are:

1. A subselect may also be expressed as a join (sketched below)

2. A NOT IN (list) may also be expressed as <> ALL (list)

3. SQL functions may be inlined into the calling SQL

4. And a lot more

These rewrites can affect the join selection strategy by turning one type of join into a completely different one.
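Here is a hedged sketch of the first two rewrites, again against the hypothetical relation and _data tables; the IN/join pair is only equivalent because _data.id is assumed to be unique.

-- (1) An IN subselect and a plain join ask the same question here,
--     assuming _data.id is unique (e.g., a primary key).
SELECT r.*
FROM relation r
WHERE r.dataid IN (SELECT d.id FROM _data d);

SELECT r.*
FROM relation r
JOIN _data d ON d.id = r.dataid;

-- (2) NOT IN over a list is just another spelling of <> ALL.
SELECT * FROM relation WHERE criteria NOT IN (1, 2, 3);
SELECT * FROM relation WHERE criteria <> ALL (ARRAY[1, 2, 3]);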

The EXPLAIN Plan

Most of the process of picking a join strategy is invisible to the caller. Using the EXPLAIN keyword, we can only see the winner of the least cost optimization process. All of the alternative plans of execution have already been whittled away from the plan list. We can affect the plan by manually rewriting the query in different forms, which can influence the parser to make different planning decisions.

PostgreSQL does not implement query hints, and there is quite a bit of resistance within the PGDG to ever adding them. Query hints are comments or statements that intentionally steer the query planner without changing the declarative meaning of the query.

In other database systems, they are used to intentionally select specific indexes, evaluations, and (apropos of this article) join types. The PGDG eschews this idea for several reasons. For starters, the performance of indexes and joins may change in the field.  

At specific data sizes, a nested loop may be the appropriate strategy. As the data grows, a hash join may become the better choice. Indexes and tables also become bloated over time due to multi-version concurrency control (MVCC) semantics.

An index that worked perfectly in design may not be so perfect in the field. Even when the user improves query performance using a specific technique at design time, there is no guarantee that the technique will continue to be the best choice in production. And it generally won't.
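A small, hedged way to see this in practice: the tables and sizes below are invented for illustration, and the exact plans you get depend on your version, settings, and statistics.

CREATE TABLE lookup_side (id bigint PRIMARY KEY, payload text);
CREATE TABLE outer_side  (id bigint PRIMARY KEY);

INSERT INTO lookup_side
SELECT g, 'row ' || g FROM generate_series(1, 1000000) AS g;

-- A handful of outer rows: a Nested Loop over the primary-key index is cheap.
INSERT INTO outer_side SELECT g FROM generate_series(1, 10) AS g;
ANALYZE;
EXPLAIN SELECT * FROM outer_side o JOIN lookup_side l ON l.id = o.id;

-- Many more outer rows: the planner will typically prefer a Hash Join
-- or a Merge Join for the same query.
INSERT INTO outer_side SELECT g FROM generate_series(11, 500000) AS g;
ANALYZE outer_side;
EXPLAIN SELECT * FROM outer_side o JOIN lookup_side l ON l.id = o.id;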

There is also a philosophical argument against query hints. That is, a query parser is specifically designed to turn declarative language into imperative processes. The whole idea is that you declare what you want, and allow the engine to decide the best method to provide the specified result.

"Influencing" the parser is a direct betrayal of that system. It makes any future improvements to the system no longer apply to your case and defeats any improvements from the query planner itself.

Not (So Much) Conclusions

This information is provided to give you something of a background for further articles about join theory and why specific joins are selected. The eventual objective is to understand why the query planner made particular choices. To get to that objective, it helps to know how the query parser actually performs the task of making a choice.

In short: The query is hacked into pieces that are mapped to functions in a list. The highest-cost functions are trimmed from the list. What is left is expressed as an execution plan. The execution plan is sent to the executor to be performed, and the results are sent back to the caller.

Stay tuned for more articles that go into more depth about the uses and misuses of the join strategies.

Read More

1. This Stack Overflow post went completely bonkers over a simple question: tsql - LEFT JOIN vs. LEFT OUTER JOIN in SQL Server - Stack Overflow

2. This blog post explains set theory fairly well: Set Theory: the Method To Database Madness | by Vaidehi Joshi | basecs | Medium

3. A YouTube video, if you don't like images:
