# Tabular data

Tabular data is data stored in tables. While that's a bit of a recursive definition, tabular data is any format that defines rows and columns, and it is the most common way data is stored. Examples of tabular data include spreadsheets, database tables, CSV files, and Pandas dataframes.

Storing and accessing tabular data has been the subject of decades of research and is the core purpose of many database systems. PostgreSQL has been leading the charge on optimal tabular storage for a while, and it remains one of the most popular and effective ways to store, organize, and retrieve this kind of data.

### Creating tables

Postgres makes it easy to create and use tables. If you're looking to use PostgresML for a supervised learning project, creating a table is very similar to creating a Pandas dataframe, except it will be durable and easily accessible for as long as the database exists.

For the rest of this guide, we'll take the [USA House Prices](https://www.kaggle.com/code/fatmakursun/supervised-unsupervised-learning-examples/) dataset from Kaggle, store it in Postgres, and query it for basic statistics. The dataset has seven (7) columns and 5,000 rows:

| Column                       | Data type | Postgres data type |
| ---------------------------- | --------- | ------------------ |
| Avg. Area Income             | Float     | REAL               |
| Avg. Area House Age          | Float     | REAL               |
| Avg. Area Number of Rooms    | Float     | REAL               |
| Avg. Area Number of Bedrooms | Float     | REAL               |
| Area Population              | Float     | REAL               |
| Price                        | Float     | REAL               |
| Address                      | String    | VARCHAR            |

Once we know the column names and data types, the Postgres table definition almost writes itself:

```sql
CREATE TABLE usa_house_prices (
    "Avg. Area Income" REAL NOT NULL,
    "Avg. Area House Age" REAL NOT NULL,
    "Avg. Area Number of Rooms" REAL NOT NULL,
    "Avg. Area Number of Bedrooms" REAL NOT NULL,
    "Area Population" REAL NOT NULL,
    "Price" REAL NOT NULL,
    "Address" VARCHAR NOT NULL
);
```

The column names are double quoted because they contain special characters like `.` and spaces, which would otherwise be interpreted as part of the SQL syntax. Generally speaking, it's good practice to double quote all entity names when using them in a PostgreSQL query, although most of the time it's not needed.
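
As a quick illustration of why the quotes matter: Postgres folds unquoted identifiers to lowercase, so a mixed-case column name can only be referenced with quotes. A hypothetical session, assuming the table above exists:

```sql
-- Works: the quoted identifier matches the column name exactly.
SELECT "Price" FROM usa_house_prices LIMIT 1;

-- Fails with 'column "price" does not exist': the unquoted
-- identifier is folded to lowercase before lookup.
SELECT Price FROM usa_house_prices LIMIT 1;
```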

If you run this using `psql`, you'll get something like this:

```
postgresml=# CREATE TABLE usa_house_prices (
    "Avg. Area Income" REAL NOT NULL,
    "Avg. Area House Age" REAL NOT NULL,
    "Avg. Area Number of Rooms" REAL NOT NULL,
    "Avg. Area Number of Bedrooms" REAL NOT NULL,
    "Area Population" REAL NOT NULL,
    "Price" REAL NOT NULL,
    "Address" VARCHAR NOT NULL
);
CREATE TABLE
postgresml=#
```

### Ingesting data

Right now the table is empty, and that's a bit boring. Let's import the USA House Prices dataset into it using `COPY`, one of the easiest and fastest ways to ingest data into Postgres.

If you're like me and prefer to use the terminal, you can open up `psql` and ingest the dataset like this:

```
postgresml=# \copy usa_house_prices FROM 'USA_Housing.csv' CSV HEADER;
COPY 5000
```

As expected, Postgres copied all 5,000 rows into the `usa_house_prices` table. `COPY` accepts CSV, text, and Postgres binary formats, but CSV is definitely the most common.

You may have noticed that we used the `\copy` command in the terminal, not `COPY`. The command comes in two forms: `\copy`, a `psql` command that reads the file on your local machine and copies it to the (possibly remote) database server, and `COPY`, a SQL statement executed by the server itself, which is more commonly used in applications. If you're writing your own application to ingest data into Postgres, you'll be using `COPY`.
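
For completeness, here's a sketch of the server-side variant. The file path is hypothetical and, unlike with `\copy`, it must point to a file readable by the Postgres server process itself:

```sql
COPY usa_house_prices
FROM '/var/lib/postgresql/USA_Housing.csv' -- hypothetical path on the database server
CSV HEADER;
```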

### Querying data

Querying data stored in tables is what this is all about. After all, just storing data isn't particularly interesting or useful. Postgres has one of the most comprehensive and powerful query languages of any data storage system we've worked with, so for our example we won't have any trouble calculating some statistics to understand our data better.

Let's compute some basic statistics on the "Avg. Area Income" column using SQL:

```sql
SELECT
    count(*),
    avg("Avg. Area Income"),
    max("Avg. Area Income"),
    min("Avg. Area Income"),
    percentile_cont(0.75)
        WITHIN GROUP (ORDER BY "Avg. Area Income") AS percentile_75,
    stddev("Avg. Area Income")
FROM usa_house_prices;
```

which produces exactly what we want:

```
 count |        avg        |    max    |   min    | percentile_75  |      stddev
-------+-------------------+-----------+----------+----------------+-------------------
  5000 | 68583.10897773437 | 107701.75 | 17796.63 | 75783.33984375 | 10657.99120344229
```

The SQL language is very expressive and allows you to select, filter, and aggregate any number of columns from any number of tables in a single query.
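
For example, building on the same table, a single query can filter and aggregate at once. This sketch averages price by the number of bedrooms:

```sql
SELECT
    "Avg. Area Number of Bedrooms" AS bedrooms,
    count(*) AS houses,
    round(avg("Price")::numeric, 2) AS avg_price
FROM usa_house_prices
WHERE "Avg. Area Number of Bedrooms" >= 2
GROUP BY "Avg. Area Number of Bedrooms"
ORDER BY bedrooms;
```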

### Adding more data

Because databases store data in perpetuity, adding more data to Postgres can take several forms. The simplest and most commonly used way is to insert it into a table we already have. Using the USA House Prices example, we can add a new row with just one query:

```sql
INSERT INTO usa_house_prices (
    "Avg. Area Income",
    "Avg. Area House Age",
    "Avg. Area Number of Rooms",
    "Avg. Area Number of Bedrooms",
    "Area Population",
    "Price",
    "Address"
) VALUES (
    199778.0,
    43.0,
    3.0,
    2.0,
    57856.0,
    5000000000.0,
    '1 Infinite Loop, Cupertino, California'
);
```

Another way to add more data to a table is to run `COPY` again with a different CSV as the source. Many ETL pipelines from places like Snowflake or Redshift split their output into multiple CSVs, which can be individually imported into Postgres using multiple `COPY` statements.
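
For instance, if an export produced multiple files (the file names below are hypothetical), each part can be loaded with its own `\copy`:

```
postgresml=# \copy usa_house_prices FROM 'USA_Housing_part_1.csv' CSV HEADER;
postgresml=# \copy usa_house_prices FROM 'USA_Housing_part_2.csv' CSV HEADER;
```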

Adding rows is pretty simple, but now that our dataset is changing, we should explore some tools to help us protect it against bad values.

### Data integrity

Databases store important data, so they were built with many safety features to protect it from common errors. In machine learning, one of the most common errors is data duplication, i.e. having the same row appear in a table twice. Postgres can easily protect us against this with unique indexes.

Looking at the USA House Prices dataset, we can find its natural key pretty easily. Since most columns are aggregates, the only column that seems unique is "Address". After all, there should never be more than one house at a single address, not for sale anyway.

To ensure that our dataset reflects this, let's add a unique index to our table:

```sql
CREATE UNIQUE INDEX ON usa_house_prices USING btree("Address");
```

Postgres scans the whole table, ensures there are no duplicates in the "Address" column, and creates an index on that column using the B-Tree algorithm.

If we now attempt to insert the same row again, we'll get an error:

```
ERROR:  duplicate key value violates unique constraint "usa_house_prices_Address_idx"
DETAIL:  Key ("Address")=(1 Infinite Loop, Cupertino, California) already exists.
```
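
If duplicates are expected rather than exceptional, for example when re-importing overlapping CSV exports, the same unique index can back an idempotent insert. A sketch using `ON CONFLICT` with the row from earlier:

```sql
INSERT INTO usa_house_prices (
    "Avg. Area Income", "Avg. Area House Age", "Avg. Area Number of Rooms",
    "Avg. Area Number of Bedrooms", "Area Population", "Price", "Address"
) VALUES (
    199778.0, 43.0, 3.0, 2.0, 57856.0, 5000000000.0,
    '1 Infinite Loop, Cupertino, California'
)
ON CONFLICT ("Address") DO NOTHING; -- skip rows whose address already exists
```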

Postgres supports many more indexing algorithms, namely GiST, BRIN, GIN, and Hash. Many extensions, for example `pgvector`, implement their own index types like HNSW and IVFFlat to efficiently search and retrieve specialized values. We explore those in our guide about [Vectors](vectors.md).
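
As a sketch of one of these alternatives (not part of the dataset setup above), a GIN index over a full-text vector of the "Address" column would accelerate text-search queries:

```sql
CREATE INDEX ON usa_house_prices
USING gin(to_tsvector('english', "Address"));
```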

### Accelerating recall

Once the dataset gets large enough, and we're talking millions of rows, it's no longer practical to query the table directly. The amount of data Postgres has to scan to return a result becomes quite large, and queries become slow. To help with that, tables should have indexes that order and organize commonly accessed columns. Scanning a B-Tree index can be done in _O(log n)_ time, which is orders of magnitude faster than an _O(n)_ full table scan.
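
For example, if our queries frequently filtered or sorted by price, a plain B-Tree index on that column (a sketch, not something the dataset strictly requires) would keep those lookups logarithmic:

```sql
CREATE INDEX ON usa_house_prices USING btree("Price");

-- Range filters and sorts on "Price" can now use the index:
SELECT "Price", "Address"
FROM usa_house_prices
WHERE "Price" > 1000000
ORDER BY "Price" DESC
LIMIT 10;
```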

#### Querying an index

Postgres automatically uses indexes when possible to accelerate recall. Using our example above, we can query data by the "Address" column, and the unique index we created makes the lookup very fast:

```sql
SELECT
    "Avg. Area House Age",
    "Address"
FROM usa_house_prices
WHERE "Address" = '1 Infinite Loop, Cupertino, California';
```

which produces

```
 Avg. Area House Age |                Address
---------------------+----------------------------------------
                  43 | 1 Infinite Loop, Cupertino, California
(1 row)
```

exactly what we expected. Since we have a unique index on the table, we should only get one row back for that address.

To ensure that Postgres is using an index when querying a table, we can ask it to produce the query execution plan before it executes the query. A query plan is the list of steps that Postgres will take to get the result we requested.

To get the query plan for any query, prepend the keyword `EXPLAIN` to the query you're planning on running:

```
postgresml=# EXPLAIN (FORMAT JSON) SELECT
    "Avg. Area House Age",
    "Address"
FROM usa_house_prices
WHERE "Address" = '1 Infinite Loop, Cupertino, California';

                                          QUERY PLAN
----------------------------------------------------------------------------------------------
 [                                                                                            +
   {                                                                                          +
     "Plan": {                                                                                +
       "Node Type": "Index Scan",                                                             +
       "Parallel Aware": false,                                                               +
       "Async Capable": false,                                                                +
       "Scan Direction": "Forward",                                                           +
       "Index Name": "usa_house_prices_Address_idx",                                          +
       "Relation Name": "usa_house_prices",                                                   +
       "Alias": "usa_house_prices",                                                           +
       "Startup Cost": 0.28,                                                                  +
       "Total Cost": 8.30,                                                                    +
       "Plan Rows": 1,                                                                        +
       "Plan Width": 51,                                                                      +
       "Index Cond": "((\"Address\")::text = '1 Infinite Loop, Cupertino, California'::text)"+
     }                                                                                        +
   }                                                                                          +
 ]
```

The query plan indicates that Postgres will run an "Index Scan" using the index `usa_house_prices_Address_idx`, which is exactly what we want.

The ability to create indexes on datasets of any size, and to efficiently query that data, is what separates Postgres from most ad-hoc tools like Pandas and Arrow. Postgres can store and query datasets that could never fit into memory, and it can do so faster and more efficiently than most database systems currently used across the industry.

#### Maintaining an index

Indexes are automatically updated when new data is added and old data is removed. Postgres keeps them efficiently organized and ACID compliant: when using Postgres tables, the system guarantees that the data will always be consistent, no matter how many concurrent changes are made.
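
To make the consistency guarantee concrete, here's a sketch (the new address is hypothetical): the unique index on "Address" is updated atomically with the row, inside the same transaction, so no concurrent reader ever sees the two out of sync:

```sql
BEGIN;

-- Both the table row and its index entry become visible
-- to other sessions atomically at COMMIT.
UPDATE usa_house_prices
SET "Address" = 'One Apple Park Way, Cupertino, California'
WHERE "Address" = '1 Infinite Loop, Cupertino, California';

COMMIT;
```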